| source | text |
|---|---|
https://www.databricks.com/company/careers/open-positions?department=sales
|
Current job openings at Databricks | Databricks

Platform: The Databricks Lakehouse Platform, Delta Lake, Data Governance, Data Engineering, Data Streaming, Data Warehousing, Data Sharing, Machine Learning, Data Science, Pricing, Marketplace, Open source tech, Security and Trust Center

WEBINAR May 18 / 8 AM PT
Goodbye, Data Warehouse. Hello, Lakehouse.
Attend to understand how a data lakehouse fits within your modern data stack. Register now

Solutions: Solutions by Industry (Financial Services, Healthcare and Life Sciences, Manufacturing, Communications, Media & Entertainment, Public Sector, Retail), Solutions by Use Case, Solution Accelerators, Professional Services, Digital Native Businesses, Data Platform Migration
New survey of biopharma executives reveals real-world success with real-world evidence. See survey results

Learn: Documentation, Training & Certification, Demos, Resources, Online Community, University Alliance, Events, Data + AI Summit, Blog, Labs, Beacons
Join Generation AI in San Francisco, June 26–29. Learn about LLMs like Dolly and open source Data and AI technologies such as Apache Spark™, Delta Lake, MLflow and Delta Sharing. Explore sessions

Customers

Partners: Cloud Partners (AWS, Azure, Google Cloud, Partner Connect), Technology and Data Partners (Technology Partner Program, Data Partner Program, Built on Databricks Partner Program), Consulting & SI Partners (C&SI Partner Program, Partner Solutions). Connect with validated partner solutions in just a few clicks. Learn more

Company: Careers at Databricks, Our Team, Board of Directors, Company Blog, Newsroom, Databricks Ventures, Awards and Recognition, Contact Us
See why Gartner named Databricks a Leader for the second consecutive year. Get the report

Overview | Culture | Benefits | Diversity | Students & new grads

Current job openings at Databricks
Filter by: Department | Location

Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
Privacy Notice | Terms of Use | Your Privacy Choices | Your California Privacy Rights
|
https://www.databricks.com/jp/trust?itm_data=menu-item-securitytrustcenter
|
Security & Trust Center | Databricks

Webinar May 9 / 8 AM PT: Discover the Lakehouse for Manufacturing
See how Corning is making critical decisions that minimize manual inspections, lower shipping costs, and increase customer satisfaction. Register now

Security & Trust Center
Your data security is our priority

Overview | Trust | Security Features | Architecture | Compliance | Privacy | Report an Issue

Databricks builds security into every layer of the Lakehouse Platform so that your data, your most valuable asset, is always protected. By ensuring compliance in your use of the platform and maintaining transparency, we strive to provide a service you can use with confidence.

Trust: Databricks values transparency and builds trust with its customers. We secure the Lakehouse Platform through industry-leading best practices such as penetration testing, vulnerability management, and secure software development. Learn more

Security Features: Databricks provides comprehensive security features that protect your data and workloads, including encryption, network controls, auditing, identity federation, access controls, and data governance. Learn more

Compliance: The Databricks Lakehouse Platform is trusted by customers across every industry worldwide. Databricks holds certifications and attestations that meet the compliance requirements specific to highly regulated industries. Learn more

Privacy: Databricks respects the privacy of your data and understands how important it is to your organization and your customers. We support your compliance with personal-data protection laws and regulatory requirements. Learn more

To let you review Databricks security on a self-service basis, we provide compliance documentation. Due Diligence Package
|
https://www.databricks.com/legal/applicant-privacy-notice
|
Applicant Privacy Notice | Databricks

Applicant Privacy Notice

Effective Date: January 3, 2023

This Applicant Privacy Notice describes how Databricks, Inc., its affiliates and subsidiaries (collectively, "Databricks," "we," "us," and "our") collect, use and share personal information during the application and recruitment process. It also contains information about your choices and privacy rights.

Information We Collect About You

We may collect the following personal information during the applicant recruitment process:

- Contact Information. We will collect basic contact information such as your full name, email address, and phone number.
- Work History. As part of the application process, we may collect a resume, CV, cover letter, and/or job application, which may include information about your education, experience, and job history.
- Educational Information. We may collect information including degrees awarded, transcripts, and other information in support of the job application.
- Online Resources. We may collect publicly available information from online sites, such as information contained within your LinkedIn profile, Facebook page, or other personal websites.
- Diversity and Inclusion. Where allowed or required by law, we may ask you to provide information such as ethnicity, gender identity, veteran status, or any disability or accommodation request during the recruitment process.
- Candidate Assessments and Presentations. For certain roles, we may collect information from an assessment of your skills. You may also be asked to perform a technical exercise or give a presentation during the recruitment process, which may be recorded and shared with the Databricks internal teams responsible for the hiring decision.
- Reference Information. We may collect reference information where applicable, including information provided by third parties.
- Information from Third Parties. We may collect information about you from recruitment agencies who presented you, recruiting sites, Databricks employees who referred you, referees, and background checking companies used to verify the information you provided during the application and recruitment process. We may combine such information with the information we receive or collect from you.
- Background Check. We may, as permitted by law, choose to conduct a background check, which may include verification of your education, job history, references, and criminal records history.
- Other Information.
We also collect any additional information that you choose to include in your application materials or share with us during the application and recruitment process, including information collected during phone screens, email/messaging exchanges and interviews.

Purposes of Collection and Use

We may process your personal information for the following legitimate business purposes:

- We collect and use your contact information to keep in contact with you during the recruitment process, including through our applicant tracking system, and to contact you about other open roles that may be of interest to you.
- We review your resume/CV and any information collected from online resources to assess your skills and experience against the position for which you have applied.
- If you are asked to complete an assessment or provide a presentation during the recruitment process, we will use that information to match your skills against the position for which you applied.
- If you move forward in the recruitment process, we may use your information to perform background checks, education verification, and reference checks, as permitted by local law.
- If you are hired by Databricks, your personal information will be transferred to your employee file.
- If you are not successful in the recruitment process, we may retain the information collected during the recruitment process, including your personal information, so we can reach out to you if other roles become available that match your skill set. If you would prefer that we remove your information from our systems, please let us know by emailing [email protected].
- Any data you submit to a Third Party Service will be subject to the terms of service and privacy policy of that Third Party Service. We may also link to co-branded websites or products that are maintained by Databricks and one or more of our business partners.
Please note that these co-branded websites and products may have their own privacy policy, which we encourage you to read and understand. When you click a link to a third-party site, you will leave our site.
- We may process personal information to comply with our legal, regulatory, or other corporate governance requirements.
- We may also process your personal information to analyze and improve our application and recruitment process.

How We Share Your Information

We may share your personal information with the following types of third parties:

- Within the Databricks group of affiliated companies. We may share your personal information within the Databricks group of affiliated companies to facilitate the application and recruitment process.
- Service Providers. We may engage service providers to perform certain functions related to the application and recruitment process, including companies that help us run our internal applicant tracking system, perform background checks, or perform audits. All third parties that act on our behalf in the application and recruitment process are subject to confidentiality provisions and to the requirement that they use your personal information only to provide the services.
- Legal Requirements. We may disclose your personal information when authorized by law or as necessary to comply with a legal process.
- Protection of Rights. We may disclose your personal information when required to protect and defend the rights and property of Databricks and its group of affiliated companies, our third-party vendors, and other users of any websites used in the application and recruitment process.

Legal Basis for Processing

- Consent. In certain cases, we ask you for your consent to process your personal information. You can withdraw your consent at any time by emailing [email protected].
- Performance of a Contract.
We collect and use your personal information where it is necessary to take steps, at your request, prior to making a job offer, to complete a contract of employment with you, or to perform our obligations under an agreement we have with you. For example, we use your personal information to make you a job offer or complete a contract of employment with you.
- Legitimate Interest. We may process your personal information for legitimate interests, for example, to conduct our recruitment processes efficiently or to manage applicants effectively.
- Other Legal Bases. In some cases, we may have a legal obligation to process your personal information. We also may need to process your personal information to protect vital interests, or to exercise, establish, or defend legal claims.

International Transfers

Databricks may transfer your personal information to countries other than your country of residence. In particular, we may transfer your personal information to the United States and other countries where our affiliates, business partners and service providers are located. These countries may not have data protection laws equivalent to those of the country where you reside. Wherever we process your personal information, we take appropriate steps to ensure it is protected in accordance with this Privacy Notice and applicable data protection laws. These safeguards include implementing the European Commission's Standard Contractual Clauses for transfers of personal information from the EEA or Switzerland between us and our business partners and service providers, and equivalent measures for transfers of personal information from the United Kingdom. Where necessary, we also make use of supplementary measures to ensure your information is adequately protected.

Data Retention

We will retain your personal information for as long as necessary to fulfill the purposes for which it was collected, resolve disputes, enforce our agreements, and comply with any laws or regulations.
Databricks may retain your personal information longer with your consent, or for a legitimate business interest where we have assessed the business benefit and confirmed that it is not outweighed by your personal rights and freedoms.

Your Privacy Rights

Depending on your residence, you may have certain rights in relation to your personal information. Depending on applicable data protection laws, these may include the right to request access to or updating of your information, to request that it be deleted, or to object to Databricks using it for certain purposes. If you wish to exercise any of your rights under applicable data protection laws, submit a request online by completing the request form here or emailing us at [email protected]. Depending on your location, you may also raise any concerns regarding our collection and use of your personal information with your local data protection authority. California residents: please see "Additional Information for California Residents" below for information about rights that may be applicable to you.

Security

We use technical and organizational measures that provide a level of security appropriate to the risk of processing your personal information. Please be aware, however, that no method of transmitting information over the Internet or of storing information is completely secure.

Changes to this Privacy Notice

We may update this policy from time to time by posting a new version on this page. We encourage you to periodically review this page for the latest information on our privacy practices.

How to Contact Us

If you have any questions about this Applicant Privacy Notice or wish to exercise your rights in your personal information, please contact us at [email protected] or at the mailing address below. We will respond to your request within a reasonable time or as otherwise required by law.

Databricks Inc.
Attn: Privacy
160 Spear Street
Suite 1300
San Francisco, California 94105
United States

Additional Information for California Residents

We provide the following information to our applicants or candidates who are California residents, pursuant to the California Consumer Privacy Act, as amended by the California Privacy Rights Act ("CCPA"). For a description of all of our data collection, use and disclosure practices, please read this Privacy Notice in its entirety.

Categories of Personal Information

Below, we describe the categories of Personal Information (as defined in the CCPA) that we have collected within the preceding 12 months and may collect about applicants or candidates on a going-forward basis:

- Identifiers
- Personal information categories listed in the California Customer Records statute (Cal. Civ. Code § 1798.80(e)) (e.g., name, address, email address)
- Characteristics of protected classifications under California or federal law
- Internet or other similar network activity
- Geolocation data (e.g., general location derived from an IP address)
- Sensory data (e.g., audio, electronic, visual or similar information, including candidate assessments and presentations)
- Professional or employment-related information
- Education information
- Inferences drawn from the preceding categories of Personal Information

Within these categories of Personal Information, we may also collect the following categories of Sensitive Personal Information (as defined in the CCPA):

- Account log-in in combination with any required security or access codes, password, or credentials allowing access to the account
- Social security, driver's license, state identification card, or passport number
- Racial or ethnic origin, for diversity purposes and consistent with applicable law
- Health-related information, to the extent necessary to provide necessary accommodations

Categories of Recipients of Personal Information

For each category of Personal Information (including Sensitive Personal Information) listed above, we disclose this information as described in the "How We Share Your Information" section of this Applicant Privacy Notice.

Sources of Personal Information

We obtain the categories of Personal Information listed above directly from you or from devices on which you use our Sites, as described in the "Information We Collect About You" section of this Applicant Privacy Notice.

Purposes for Which We Use Personal Information

We use Personal Information for a variety of business and commercial purposes, as described in the "Purposes of Collection and Use" section of this Applicant Privacy Notice.

Retention Period Criteria

Databricks retains personal information as described in the "Data Retention" section of this Applicant Privacy Notice.

CCPA Rights and Requests

To the extent provided by the CCPA, you may make the following types of requests under the CCPA with respect to Personal Information that we process on our own behalf.

Requests to Know, Correct, and Delete. You may request:

- Access to a copy of the specific pieces of Personal Information that we have collected about you;
- Correction of Personal Information that we maintain about you, if it is inaccurate; and/or
- Deletion of Personal Information that we maintain about you, subject to certain exceptions.

You may submit such requests by completing the request form here or emailing us at [email protected]. We will respond to your request consistent with the CCPA, and subject to any exceptions that may apply under the CCPA. We may need to request additional Personal Information from you, such as email address, state of residency, or mailing address, in order to verify your identity and protect against fraudulent requests.

Authorized Agents

If you wish to use an authorized agent to make a CCPA request on your behalf, please direct the authorized agent to use the submission methods noted above. As part of our verification process, we may request that the authorized agent provide, as applicable, proof of status as an authorized agent.
We may also require you to verify your own identity directly with us, or to confirm directly with us that you provided permission to submit the request on your behalf.

Nondiscrimination

You have the right to be free from unlawful discriminatory treatment for exercising any of your CCPA rights.

Please contact us at [email protected] if you have any additional questions.
|
https://www.databricks.com/blog/2021/07/23/augment-your-siem-for-cybersecurity-at-cloud-scale.html
|
How to Augment Your SIEM for Cybersecurity at Cloud Scale - The Databricks Blog

Augment Your SIEM for Cybersecurity at Cloud Scale
by Michael Ortega and Monzy Merza
July 23, 2021 in Platform Blog

Over the last decade, security information and event management tools (SIEMs) have become a standard in enterprise security operations. SIEMs have always had their detractors. But the explosion of cloud footprints is prompting the question: are SIEMs the right strategy in the cloud-scale world? Security leaders from HSBC don't think so. In a recent talk, Empower Splunk and Other SIEMs with the Databricks Lakehouse for Cybersecurity, HSBC highlighted the limitations of legacy SIEMs and how the Databricks Lakehouse Platform is transforming cyberdefense.
With $3 trillion in assets, HSBC's view warrants some exploration.

In this blog post, we will discuss the changing IT and cyber-attack threat landscape, the benefits of SIEMs, the merits of the Databricks Lakehouse, and why SIEM + Lakehouse is becoming the new strategy for security operations teams. Of course, we will talk about my favorite SIEM! But be warned: this isn't a post critiquing "legacy technologies built for an on-prem world." It is about how security operations teams can arm themselves to best defend their enterprises against advanced persistent threats.

The enterprise tech footprint

Some call it cloud-first; others call it cloud-smart. Either way, it is generally accepted that every organization is involved in some sort of cloud transformation or evaluation, even in the public sector, where onboarding technology isn't a light decision. As a result, the major US cloud service providers all rank within the top five largest market-cap companies in the world.

As tech footprints migrate to the cloud, so do the requirements for cybersecurity teams. Detection, investigation and threat hunting practices are all challenged by the complexity of the new footprints, as well as the massive volumes of data. According to IBM, it takes 280 days on average to detect and contain a security breach. According to HSBC's talk at Data + AI Summit, 280 days of retention would mean over a petabyte of data, just for network and EDR (endpoint detection and response) data sources.

When an organization needs this much data for detection and response, what is it to do? Many enterprises want to keep cloud data in the cloud. But what about moving it from one cloud to another? I spoke to one large financial services institution this week who said, "We pay over $1 million in egress costs to our cloud provider." Why? Because their current SIEM tool runs on one cloud service while their largest data producers are on another; their SIEM isn't multi-cloud.
Over the years, they have built complicated transport pipelines to get data from one cloud provider to the other. Complications like this have warped their expectations of technology: for example, they consider a five-minute delay in data to be real time. I present this as a reality that modern enterprises are confronted with; I am sure the group I spoke with is not the only one facing this complication.

Security analytics in the cloud world

The cloud terrain is upending every security operations team's way of working. What was called big data 10 years ago is puny by today's cloud standards. At the scale of today's network traffic, gigabytes are now petabytes, and what used to take months to generate now happens in hours. The stacks are new, and security teams are having to learn them. Mundane tasks like "have we seen these IPs before?" are turning into hours- or days-long searches in SIEM and logging tools. Slightly more sophisticated contextualization tasks, like adding a user's name to network events, are becoming near-impossible ordeals. And if you want to do streaming enrichment with external threat intelligence at terabytes of data per day, good luck; you'd better have a small army and deep pockets. And we haven't even gotten to anomaly detection or threat hunting use cases. This is by no means a jab at SIEMs. The terrain has changed, it is time to adapt, and security teams need the best tools for the job.

What capabilities do security teams need in the cloud world? First and foremost, an open platform that integrates with IT and security tool chains and does not require you to hand your data to a proprietary data store. Second, a multi-cloud platform that can run on the cloud (or clouds) of your choice. Third, a scalable, highly performant analytics platform that decouples compute from storage and supports both end-to-end streaming and batch processing.
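To make the "have we seen these IPs before?" task concrete, here is a minimal, purely illustrative sketch of a first-seen/last-seen index over network events. In a real deployment this would be a query over months of stored logs; the class and field names here are hypothetical, not part of any Databricks or SIEM API.

```python
class IpSightingIndex:
    """Toy stand-in for a historical network-log store, keyed by source IP."""

    def __init__(self):
        self._seen = {}  # ip -> [first_seen, last_seen, event_count]

    def record(self, ip, ts):
        # Fold one network event into the index.
        if ip in self._seen:
            entry = self._seen[ip]
            entry[0] = min(entry[0], ts)
            entry[1] = max(entry[1], ts)
            entry[2] += 1
        else:
            self._seen[ip] = [ts, ts, 1]

    def have_we_seen(self, ip):
        # The triage question: None means this IP is new to the environment.
        if ip not in self._seen:
            return None
        first, last, count = self._seen[ip]
        return {"first_seen": first, "last_seen": last, "count": count}
```

The analyst-facing check then reduces to a single lookup; the point of keeping long history online is that this lookup can cover months of events rather than the few days a SIEM license typically retains.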
And finally, a unified platform that empowers data scientists, data engineers, SOC analysts and business analysts: all data people. These are the capabilities of the Databricks Lakehouse Platform.

The SaaS and auto-scaling capabilities of Databricks make these sophisticated capabilities simple to use. Databricks security customers are crunching through petabytes of data in under ten minutes. One customer collects from more than 15 million endpoints and analyzes the threat indicators in under an hour. A global oil and gas producer, wary of ransomware, runs multiple analytics and contextualizes every single PowerShell execution in its environment, so analysts only see high-confidence alerts.

Lakehouse + SIEM: the pattern for cloud-scale security operations

George Webster, Head of Cybersecurity Sciences and Analytics at HSBC, describes Lakehouse + SIEM as THE pattern for security operations. It leverages the strengths of the two components: a lakehouse architecture for multicloud-native storage and analytics, and a SIEM for security operations workflows. For Databricks customers, there are two general patterns for this integration, both underpinned by what Webster calls the Cybersecurity Data Lake with Lakehouse.

In the first pattern, the lakehouse stores all the data for the maximum retention period, and a subset of the data is sent to the SIEM, where it is stored for a fraction of that time. The advantage is that analysts can query near-term data in the SIEM while retaining the ability to do historical analysis and more sophisticated analytics in Databricks, and teams can manage licensing and storage costs for the SIEM deployment.

In the second pattern, the highest-volume data sources are sent to Databricks (e.g., cloud-native logs, endpoint detection and response logs, DNS data and network events), while comparatively low-volume data sources go to the SIEM (e.g., alerts, email logs and vulnerability scan data).
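The two patterns boil down to a routing and retention decision per data source. The sketch below encodes that decision; the source names, the high-volume classification, and the retention figures are illustrative assumptions, not part of any Databricks or SIEM API.

```python
# Hypothetical classification of sources by volume, per the second pattern.
HIGH_VOLUME_SOURCES = {"cloud_native_logs", "edr", "dns", "network_events"}

# Hypothetical retention split, per the first pattern: full history in the
# lakehouse, a fraction of it in the SIEM.
RETENTION_DAYS = {"lakehouse": 365, "siem": 30}

def route(source_name):
    """Send high-volume telemetry to the lakehouse, the rest to the SIEM."""
    dest = "lakehouse" if source_name in HIGH_VOLUME_SOURCES else "siem"
    return dest, RETENTION_DAYS[dest]
```

Under these assumptions, DNS telemetry lands in the lakehouse with long retention, while email logs or alerts go to the SIEM for near-term alert handling.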
This pattern enables Tier 1 analysts to quickly handle high-priority alerts in the SIEM, while threat hunt teams and investigators leverage the advanced analytical capabilities of Databricks. It also has a cost benefit: processing, ingestion and storage are offloaded from the SIEM.

Integrating the Lakehouse with Splunk

What would a working example look like? In response to customer demand, the Databricks cybersecurity SME team created the Databricks add-on for Splunk. The add-on allows security analysts to run Databricks queries and notebooks from Splunk and receive the results back in Splunk. A companion Databricks notebook enables Databricks to query Splunk, retrieve Splunk results, and forward events and results from Databricks to Splunk.

With these two capabilities, analysts at the Splunk search bar can interact with Databricks without leaving the Splunk UI, and Splunk search builders or dashboards can include Databricks as part of their searches. But what's most exciting is that security teams can create bi-directional, analytical automation pipelines between Splunk and Databricks. For example, if there is an alert in Splunk, Splunk can automatically search Databricks for related events and then add the results to an alerts index, a dashboard or a subsequent search. Conversely, a Databricks notebook code block can query Splunk and use the results as inputs to subsequent code blocks.

With this reference architecture, organizations can maintain their current processes and procedures while modernizing their infrastructure and becoming multicloud native to meet the cybersecurity risks of their expanding digital footprints.

Achieving scale, speed, security and collaboration

Since partnering with Databricks, HSBC has reduced costs, accelerated threat detection and response, and improved their security posture.
Not only can the financial institution process all of their required data, but they've increased online query retention from just days to many months at the petabyte scale. The gap between an attacker's speed and HSBC's ability to detect malicious activity and conduct an investigation is closing. By performing advanced analytics at the pace and speed of adversaries, HSBC is closer to their goal of moving faster than bad actors.

As a result of these data retention capabilities, the scope of HSBC threat hunts has expanded considerably. HSBC is now able to execute 2-3x more threat hunts per analyst, without the limitations of hardware. Through Databricks notebooks, hunts are reusable and self-documenting, which keeps historical data intact for future hunts. This information, as well as investigation and threat hunting life cycles, can now be shared between HSBC teams to iterate on and automate threat detection. With efficiency, speed and machine learning/artificial intelligence innovation now available, HSBC is able to streamline costs, reallocate resources, and better protect their business-critical data.

What's next

Watch Empower Splunk and Other SIEMs with the Databricks Lakehouse for Cybersecurity to hear directly from HSBC and Databricks about how they are addressing their cybersecurity requirements.

Learn more about the Databricks add-on for Splunk.
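As a toy illustration of the bi-directional automation pattern described earlier (Splunk alert -> lakehouse lookup -> results written back to a Splunk index), the sketch below shows the control flow in plain Python. Both sides are hypothetical stand-ins: `run_lakehouse_query` and the in-memory alert index are placeholders for what the Databricks add-on for Splunk actually wires to Databricks jobs and Splunk's REST API.

```python
# Sketch of the alert -> enrich -> write-back loop, with hypothetical
# stand-ins for both systems. run_lakehouse_query is a placeholder for a
# Databricks query over historical Delta tables; alert_index stands in
# for a Splunk alerts index.

def run_lakehouse_query(indicator):
    # Placeholder: look up an indicator's history in long-retention storage.
    history = {"203.0.113.7": ["dns lookup 2021-01-04", "beacon 2021-02-11"]}
    return history.get(indicator, [])

def handle_splunk_alert(alert, alert_index):
    # On a Splunk alert, search the lakehouse for related events,
    # then append the enriched alert back to the alerts index.
    related = run_lakehouse_query(alert["indicator"])
    alert_index.append({**alert, "related_events": related})
    return related

index = []
related = handle_splunk_alert({"indicator": "203.0.113.7", "severity": "high"}, index)
print(len(related), len(index))
```

The same shape works in the other direction: a Databricks notebook cell queries Splunk and feeds the results into subsequent cells.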
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
Privacy Notice | Terms of Use | Your Privacy Choices | Your California Privacy Rights
|
https://www.databricks.com/dataaisummit/speaker/pieter-noordhuis
|
Pieter Noordhuis - Data + AI Summit 2023 | Databricks
Pieter Noordhuis, Sr. Staff Software Engineer at Databricks. Software engineer working on developer tooling.
|
https://www.databricks.com/discover/demos/delta-lake
|
Delta Lake Demo: Reliable Data Lakes at Scale | Databricks
Delta Lake on Databricks Demo

With Delta Lake on Databricks, you can build a lakehouse architecture that combines the best parts of data lakes and data warehouses on a simple and open platform that stores and manages all of your data and supports all of your analytics and AI use cases.

In this demo, we cover the main features of Delta Lake, including unified batch and streaming data processing, schema enforcement and evolution, time travel, and support for UPDATEs/MERGEs/DELETEs, as well as touching on some of the performance enhancements available with Delta Lake on Databricks.

Video transcript

Delta Lake Demo: Introduction
The lakehouse is a simple and open data platform for storing and managing all of your data, one that supports all of your analytics and AI use cases. Delta Lake provides the open, reliable, performant and secure foundation for the lakehouse.
It's an open-source data format and transactional data management system, based on Parquet, that makes your data lake reliable by implementing ACID transactions on top of cloud object storage. Delta Lake tables unify batch and streaming data processing right out of the box. And finally, Delta Lake is designed to be 100% compatible with Apache Spark™, so it's easy to convert your existing data pipelines to begin using Delta Lake with minimal changes to your code.
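The ACID and time-travel behavior described in the transcript falls out of one core idea: a table is an ordered log of commits over data files. The toy below imitates that idea in plain Python purely as an illustration; it is not Delta Lake's actual implementation, which stores JSON commit files alongside Parquet data in object storage.

```python
# Toy model of Delta Lake's core idea: an append-only log of commits,
# so readers always see a consistent snapshot and can "time travel"
# to any earlier version. Illustrative only -- not the real protocol.

class ToyDeltaTable:
    def __init__(self):
        self.log = []                    # ordered list of committed row-batches

    def commit(self, rows):
        self.log.append(list(rows))      # atomic: one commit = one log entry
        return len(self.log) - 1         # version number of this commit

    def snapshot(self, version=None):
        # Read the table as of a given version (default: latest).
        if version is None:
            version = len(self.log) - 1
        return [row for batch in self.log[:version + 1] for row in batch]

t = ToyDeltaTable()
v0 = t.commit([{"ip": "10.0.0.5"}])
v1 = t.commit([{"ip": "203.0.113.7"}])
print(len(t.snapshot(v0)), len(t.snapshot()))
```

Because readers only ever consult the log, a half-finished write is invisible until its commit lands, which is the essence of ACID on top of immutable files.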
|
https://www.databricks.com/p/webinar/building-machine-learning-platforms?itm_data=lakehouse-link-buildingmlplatforms
|
Building Machine Learning Platforms | Databricks

Hear Matei Zaharia's keynote on MLOps and ML Platforms: State of the Industry -- now available on demand. Learn about managing the end-to-end ML lifecycle at scale from Databricks' chief technology officer and co-founder. Topics covered:

Product demos: How to operationalize data science and ML on Databricks with MLflow
Real-world use cases: Analytics experts at Blue Cross, Atlassian, Quby and Outreach
Q&A panel: Hosted by Ben Lorica, Chief Data Scientist, Databricks

Featured speakers: Matei Zaharia, Co-founder and Chief Technology Officer, Databricks; Ben Lorica, Chief Data Scientist, Databricks; Clemens Mewald, Director, Product Management, Data Science and ML, Databricks
|
https://www.databricks.com/kr/product/google-cloud?itm_data=menu-item-gcpProduct
|
Databricks on Google Cloud Platform (GCP) | Databricks
Databricks on Google Cloud

Unify data engineering, data science and analytics by combining an open lakehouse platform with an open cloud.

Databricks on Google Cloud is a jointly developed service that lets you store and manage all of your data on a simple, open lakehouse platform that combines the best of data warehouses and data lakes, and unify all of your analytics and AI workloads. Databricks integrates tightly with Google Cloud Storage, BigQuery and Google Cloud AI Platform, so you can work seamlessly with your data and AI services on Google Cloud.
Reliable data engineering
SQL analytics on all your data
Collaborative data science
Production machine learning
Why Databricks on Google Cloud?

Open: Built on open standards, open APIs and open infrastructure, so you can access, process and analyze your data however you choose.
Optimized: Deploying Databricks on Google Kubernetes Engine gives you the first Kubernetes-based Databricks runtime on any cloud, so you get to insights faster.
Integrated: Access Databricks with one click from the Google Cloud Console, and take advantage of unified security, billing and administration.

"Databricks on Google Cloud simplifies applying a wide range of use cases on a scalable compute platform, shortening the planning cycles needed to deliver a solution for each business question or problem statement we take on in Databricks."
-- Harish Kumar, Global Data Science Director, Reckitt

Streamlined integration with Google Cloud

Google Cloud Storage: Seamless read/write access to data in Google Cloud Storage (GCS), with the strong reliability and performance features of the open Delta Lake format within Databricks.
Other integrations: Google Kubernetes Engine, BigQuery, Google Cloud Identity, Google Cloud AI Platform, Google Cloud Billing, Looker, and a partner ecosystem.

Resources

Virtual event: Virtual workshop: the open data lake
News: Databricks partners with Google Cloud to bring its platform to global enterprises
Blogs and reports: Announcing the launch of Databricks on Google Cloud; Introducing Databricks on Google Cloud -- now in public preview; Databricks on Google Cloud datasheet; Databricks on Google Cloud general availability; Data engineering, data science and analytics with Databricks on Google Cloud
|
https://www.databricks.com/wp-content/uploads/2022/09/fy-22-databricks-modern-slavery-policy-statement.pdf
|
duٔ�:0V�O(b�a���؆+�G��1������㬞nΛ��>�Tg����%���m��G� �:�V$ �^o��}D� U�Equ�/O�Z�������#"�a*įGE���l���;a���a+�Όr�
��&���(a�
���8N3�]e����;��!�6B/^]5l�e�YQ|�(m�b�~m��
V�$�!&d���E煙P/n����av�?y�x`�v��U�=�_+��]ۛʸ1��V�
ɥ�C��Se���LG��3���~ J'�2$��Ǎ�8�^���.��0SD44���v��'�2��7�q��W]~��h�F��tQ�z��sX�f��Q�p7W����S�8J�K�ls��h"I�k����H�6\K��ΜB>��y�$�1~1j�PzƜ��R�;�����|ֱ��p�����Ā����V�CkN��ƶY��5�eȄ�Mx�)J�%�[�i�T��-��JE��N��#�lL.yNȇ^�"h��J�eX%�f�C���v�Y�zI�hC�9;��wB�h,X>�M�Z(ޤ�S�un�fZבp�ʊh�
�ș�7�Y��o��_d�r&qyݥ�*J+�;���]�[�� M����L��L��q�U���A�i��fX礛U��]��s;@,Rk�>������4���s� @�$��Y`{������~'Ae�#<�h� lS>n��*�4=7�ơ��ay��$^nXap���![��9�[��6�=LHv�� n��'�����^w��i���4�Y(~X],�~��u�ԝf����|��S�r�$|��3Qў%ky~�L}���"ʚc%���㦕��3� Z-�ޅ̆��uV���(
MD%ѬLV��f�֙,o�]�z�l!��y�'��:q�?q�D���j͉]�p"�ii��O�J�|���WE�F�1?��bR�
&H|���EL�/�x}�v�;��+���VQ���g������Fn�+����#�Sտ)l 8�.��_�[EM�a3�֪�*0Έ�Ypuz\K��d�"�ى���P���_�0:���b����i =�$��]���A��r��|��9zO�!A�
�z�쫧{v1��V1%�����3��=�"*��W�D�m���ߔ#S>Tי1�#Iy�M:���S=�6+��N�M�B����N��&�� �{<��>W!>��h#� �ҧ����o�/8�8�D�S�F��t�p��W����Tl���*&-��$(��v�Bl����P�(��@�_��
�O� �g�)�"��{� �A���o����#���z�N�<Q� ��!�v�k��3w��D�+��R�?����0�w���r�G:�1 ���CHT(���sw��s�ei�(���{�s\�x�۾N8�3R����Y���M�y:�~��,c��,�_^l/������ťQ
z��ٿ �XZx$"����O���I��5���jx3�98.6�m�����K(�9�����ړOZq�=��]�w�Յɵ^����^.5��惈�M�Nh�}� �l�c��:?MV�s�4�Q��R[����i�qD�"�(�0�a
�tĉ�2��D~D&�D��؎|4e��W����j������M
��HQ�A����5 �w��&�V��W�[z���\��ij�,�YS�^��Y+�K��Pw�f��[�ͫ\�N��� �e��
endstream
endobj
3 0 obj
<>/Font<>/ProcSet[/PDF/Text]/XObject<>>>/Rotate 0/StructParents 2/TrimBox[0.0 0.0 612.0 792.0]/Type/Page/PieceInfo</LastModified/NumberOfPageItemsInPage 9/NumberofPages 1/OriginalDocumentID/PageItemUIDToLocationDataMap<>/PageTransformationMatrixList<>/PageUIDList<>/PageWidthList<>>>>>>>
endobj
4 0 obj
<>stream
H��W�nG}�W�#���ꥺA��r�8Qb0����VU<��b^Y?l�f�^|��F��q�п�b���������~q�Rԝ� �g�cQ��
eG�@�x���
�TP�Kv�Z�G�Jh�8�&ø�F���1;#�ê�p�q�7xV;M�\���(��Ww�Ч=��� ^�xEAz��f��DZ���?�G�W�%�*l֨��ux�'��ʗ+D
Ie�H��Ge��%{����Ku�s��$�4t;A6�%� n#��Q��|t��Ť�%���M�}�;��\Ck
<�y V��P�]��1�g��k�W���R ���?�-nT��d(�:
�R�οu�a�(y���QӮEڞX<8�!!ȿn
��s��
t凌b;O�W;"u�|b�)�;me���L"�tҡa%r/�4h����T�-�>��U��c�o�^���T�SM�1ē���^�ε���S�XC��52�*rb����V'�ds���1�F���nm�Ze��%�7/�9��w�8���@Hd��Ui�=�Od�'��>mF�����ڮ�](�{�פ��(�o������)�$�w ��12����C��r���V��� �oe�Y��Fg�ѷv�'��LT<��Ts���O+Ճ�.�#��x9
���I�?%�ݣ����\�������a�-�̃�v��v@=�h�':��24W�?��X�]#�0�ܾ��M�{�AڟO��m_d���]�ЧL��3F��6��-.T���7jITQ�X�a�� ɬ�w��?(�K��?�j�#��4��Ұ>���(����*�Sl���ʅH�B�wU����_O�FR�u����4�F�1$�%�������mNY �k����&�:�-f�u�$��X/z�č�Áa�Q#6U�Q�4a���TE!�IŽ&D�S]�xs���
�/�I�[�JÞ7!�F�K���r�� �D�lc�(�ҭw�/�H���������4�e��Z�*d���k��j��I�I %�LZ���p���b/ᔆ;��7�NPt����A���r� �6
i����_2���J��S3f�hq%Z�I'1�����ؖ=Ie�*�]Ճ�-d�]Ӑ[�!״Ȣ��C*��@jQ
�EX��� mb�B�e?{�w��c7����)
�8� ]�oǶ�\�e���-/9"�^�6�>BЄ�(�Ghi���;�ip�S4�z�˂�/�ʣ�/��&�@p�� n�+LAn��4W�&��B^7� ���U���X��t/�Ÿ�>cs��
~K��`�nĝ��A
��fV$@�������5؎��E�����?�d�/����.��DZN��A/>i�� ϴ��?�J���M���Ĕ�Ўy�O�s#�|�X
�Ǝg�w?A�K�xIo��PY����pҡ��)B�W��|�"m��z0�n��C��xM��M��]��
V}����E��8��G���q�� ����~�ҚT�vw���l��gl�o56w��I�aG�m{��� �N�z�«���5:��$�ƯlO~C6�k�%6��g�Y���b�p��� �X>�ю���=���
r�yW��R���wj�^���dc뼰Y���X���?�������Mz!�É���k���M�^z֎�:�
>����)�ǝɐ;�Sݔ�B9O�%|�oWqGɜ��;m'ko1N%£/+����M��������B�"q�&B5?պI���ɭI�����;Xm�}6hM��(2��>����ѧJ��i�h)>�ֵ(�o"����$w�͌k$'�<8پ���i�a���D#����{ xufu���ؽv3�_�qq�A�U)p��U�{5Tl% R&��A�7)8]�䟉�@؛��b֓ YF�� )�/���۴�k�14��U6���_�j�FۗW���'��}H���EA~B�@tI
={���<~�-~���;�S.gh�Y���3��?v��ŭ����w��N�iy�$�<.,��4��aLjy��5��U�J���q�Bd��s�(
�豬c,��ɐ��YwW��>��j���Y���b�$29�.B�^�8˪n(�M���6��H�ߌ��]C���S���D�o)aCJ� ���r���
���f-1����2���;�e�
��TBd��|������4����v�������
~�iylv4π�^o�(�n{f f�Fn}\��˦�m�w�
�:��� �����t��N�ܒ"[N�X�"�I����� �nz ���w��N�d���̒�� ӸRO�!�J� ,�,o��|I�wCi�����F>c���JDz�b��Ʀ�- �Eӌ鉟�2G��!�H�05@�lj�'cC����n�6Db��Ť�W#m�h��"�U��r�nW#0�f*Q�G�[m�����J��zЎ���C�Á ry��G>Qޯ5�~��
& ���vC�A@���.����
�]�bZ���,��/!�c
��iO�����C=ߡ�x����pr�;!9C\�A��F���cZ�I;�|�ks/�A��̏{��:Y�R���W�,��n�X^B�{ݣ���T���lq�}��;���B�L�5���by���f��`����2i$� ��C � #��ObV��@�! �K���ՠ�{{��ɑP�2^�)
>�%�}�G�/���0wl�c��Ɩ�It��d��7{}
���}�����S�rr,j-�Eb;G�#8;�&�!ӓ`�m>D-��q��_�_�'j��K�T���\,9��K �Z����b�'��u'��$_h�PqJ���"aa��G��H>�ju����@#,k
-˭���A�חՎ��8[U����H#:z
�Y(�`V�u���$o)�ܞ����R���Y
��-ڞ�7i���W�x^��'
Ξ�n�R�|�-�b�T�H�5�AٗΰY"�D������.-E"Q)7S��k��f�.�Y���=\:�}1�H�ꑴ>do4}q$��^�sz�������Ɋo^���o�k�}�|V��b.�3&�i��E�H�H_��Xg候4���J�l��-�K�u��+��>�M�&iD�#�e�\�X�|�Z����C0��ߦ�Eyd�"'n���2CȣL��~P?i����rpW�O�{��peK3+Ô�/h-���kJ��
fFI���0��F��ZD�6��=�d磶-O�hf��B�'�ZG���ǃ�,5\�5� �̓���E�o�0b�D;�
ҡ�Q�ĻJw�a
b��d��[��4�|�Q�"� {������n ���x����!�%% _��[i�;
g�F���8���g�*�%o?�
�ϒ�"����ZU(�^K1}����S�ٝq.+��;
%���g�L�����5h^Z����1�ߵ�)�u��c�3Y��n,.�-�N9Ma�R+J7d�!7[5S�J'SU��A��g _zAՇnP���Z�����H[�����:�f/�'�;4@�&4��� ')Ų��g�}Gڣ�V��$�q�
q6q-���˲��'m�J ;��~<��y|�+,i|�뗜K)��=&W%��6��[A�D�L�5SA(*�O�
k
�F����jKC� Ϭb�`W@j�w�H8�&mK� �� B��݅̎��Dp��g�t���(d�c�w�7�bb^��|So���.��pDn�~���j�!�����*$�i���3a>�}��������`���fė��Z���볅��v���o�ޝ}8�9��`t!�};(�;Շ�N����g���:����&��䈜�h�:�e?���w��͑��` �
endstream
endobj
5 0 obj
<>/Font<>/ProcSet[/PDF/Text]/XObject<>>>/Rotate 0/StructParents 3/TrimBox[0.0 0.0 612.0 792.0]/Type/Page/PieceInfo</LastModified/NumberOfPageItemsInPage 14/NumberofPages 1/OriginalDocumentID/PageItemUIDToLocationDataMap<>/PageTransformationMatrixList<>/PageUIDList<>/PageWidthList<>>>>>>>
endobj
6 0 obj
<>stream
H��Wmo���_�O��H.�/@��u�SN�6�Ez�j[���Q��;/���v�|j���.9�y�g����Q��#����|6�tu�����������G��w�?��w�ϫ����NN��dyd��Q����7j}�[:^.��jys��V� ˕/�P;���h����Ggˣ����p�mr�N8pr��S�Z���*+� �/\P�m�緱��h�|k���zt��`�<8�X��>�av�V� N�\��40\����>IJ5�ri�n��D��K�����[5�~���>�a��TYk�<��J���D-��d
e��U5�0��4�:;����\�m���Q�C�ﺰjv���Z�aݽ���V�K=op�G�^�6�7��Oh_�J��R�����]�C�k�&�h&n��w���h�(��x�g� ־����ᇕ��g͇�i�4��N�'~�����K����q����Kt���x���ڡCW
�V��# ��<jw�~��_VC�����<���L���])���sN-�'�k&�ӿ-����n&ڜ��U�K5�OOj�~U-��^C�b,MDʝ�J*8� (2�
�C)Ʒ+M���%+�;0zO\�i����E�#MD���t��{��H���)%҇���7rT^�`y�|��X�x��)��T�M쁶���C>�;�����-����0J9�/̞ܿ�*�{�_���;c\�q���`�o��ێ3�s�{x|m;�9<����w|�҂�^w�u2k�����h�nίFK���b���
�����ppÎ7o:�X4�/���\G��M����ڲ��|a0,�mͦ�հߋ&;!��A61�Eײ�@���\����z���a�J��\p��ٯ�:.�/zzf���,i�YCg�u�}pu�<œ�o�jt�w�s��O\ mWކ�ð2k�4��gk�0Ɔ��ȰHZ�F���K�k>g�<2!G?��c�+*���d��cұQ����:�DJJ���%6X��Qہ��@�B�D�6L��K���Z�&f�R�Ai�8�%�����F�W����d$��4x
��F�aU��/d+P��e�D�Rmb�Pw��&�����b�(�����Xz���O85Js�ґ���'��7Q�����r�b1?�'���o��Y1uϞt�)�Qc �X~�c���:Ɗ���ePn����hh"�}��\J�'sٕY�V�pA�R}�����T����m��M4�y��U�0���[e���_6�q�@�������(J�6N��Ik���(Ҥ��qj;N�_���PCI�֢.R�[X��p8��ym��ߪ5W�Hmn�Tzg\(�i�=c�zq �����w�/�J�-k��,Q���㥥{a���aA��� Î`%|��X�������34,��������ꀮ}&G��&�1���XއK��~GӼ��f���
$ؕ�f�Y^��N3e�����+ Ͼͼ�-�`�#����8�&�8���n3��Uײ//��eC~�{���?Qqfr"��G�wr�ym/m�a\��ONE��
��S�un�:�����K�Ս�$U��JŘlZ�05��n^�ٮcoj�؏OC̫2����U��J�a�F�
���ٙcC���������:��&�H��RBY-�W�g�β�v���Wyղ+R��(*h�C^E
TQ�V�����I�v�Or
MI����8�cB��f�"�=��ӈ+� 0�������LDI6�D��e4۫f:n����Ѷ��J�,����i ���i�I
M�6�Z� ���=��U���]��_�k�i?�Z� �ihQP��Ȿ�+��EH����.hub�w�cu���=��[g��$�Ce�˒^�ݍ�ދ�g9 <�VD�W�B�6
�6+��wr��Ĭ�'��=~������H���[����N
��S�jު}�������wj�^g�U����<��
d��vd�Σ4Ч5~Jީ�62)K�7��:�C$ ��z�y6��G+K�4��p[��ы�b�$/,O�sQR縻�UZ��{Z�bpoL��74�p��=�T����^k(}j�
mQ�r���)/��7�d��v`(�,��ɍ�b�=�]��-v&��Z��?��?ĉ~�gBi�kR�_?
�Ua��1���A.���9�j0��������q�?�YRa{����9��Q�l�����y482�ԫ�N6{-T��+��>�)j�Z��'36p���k�vzSWP I#hѥT4�>C�Pq�>�TlB<��p�f���"�]�fm�v����.� ��_�O@�UM���m��O:���ܣ]<}\3H!t$����n=y�Q�����H;�K4W�RV4;���uT�-Ʉ�nz��K��Dl0mI�2g[�V� �������b���!xc��;�Q�g\�P:R!�8;_�T"��P���#ǒ�;Ra�*�W�3V�vB�)���~��Q����p��8jf��d�Kh<m��A�mF`S�C2� oLVG�j�G�s�r`6��#�b5tG!/h�FE��W���]�M�h[d-���*����Z�]Z2Zb�80G�x�ĸ�6��
O�]�*�'*)��P��ˈ�X.-�2�½��wo_���>~|���`�7�`�
g���Fo~����L�]~���[_��:wu��Zm�-
6�/���b�U�ca�ӳ�?_�-�.~�'6��,��e2�q(��Ä⋖��˚�wb�kK!m���
},*���e ��*���)]� �T�����2P
����Ee>����`�X��@N6/��h��7���[� ܆e
endstream
endobj
7 0 obj
<>/Font<>/ProcSet[/PDF/Text]/XObject<>>>/Rotate 0/StructParents 4/TrimBox[0.0 0.0 612.0 792.0]/Type/Page/PieceInfo</LastModified/NumberOfPageItemsInPage 9/NumberofPages 1/OriginalDocumentID/PageItemUIDToLocationDataMap<>/PageTransformationMatrixList<>/PageUIDList<>/PageWidthList<>>>>>>>
endobj
8 0 obj
<>stream
H��W�n7}���c7��l^F��da#�Z�}p�@�%�]lI��|}�Bv���qk7�E���.V��:u�qq��+).�R�Y�U�\Y�� �.K''R(qr��w�_��va���^\/۞�{q|�8~�>���5��z}��x���[��?�3+m�W����G��]�E�0*�.z߃'�p���|q���c���\���f��W�����GB���YzZwh�b��K�v �'g"->% ���U��j%�$^?�]�_���<��2�-U��+���F*����R���g��~V�+�*|K[��&�Z��M��ud�������+��c�G�"YR�R��ǁ-�M�0��X�l6������]: ��'Gz���:=
�'�tm��\=�=@��C>�lyv_���=��B�G��Z���-V�����}�:��v�v@����*����������� ��Mh�7���ghގ;��E�#����M��\��Ř3����T�9n��R ��|��O�>P��F4E#�m�VFI#��UckL�#0Cj�*��#c5e���?�U�Ա���}.8Qsp��ܔ�"��\�ə�NJs�m�6���ei���r�F��qf��=D1|�����;vw�@_9�U�S;�/�.�`� ��2NO�T���3�Х���21O�j|r
nӛ�� ߪ�ŵU>��
h�-�w8�mN�c�v$�A�!�3����ʜ�� �J�%O�䑤vj��9('CUWͪ|����1:z��Q˴���g�B�u6@}�2n[�l��߷�p�yk����\�`���-f�yh�h�m�m�\Џ��Ƃkn[����mȨ�Y����A*���i��_7o��q��Ilv�eсs�C���C!x���o~k5v~3�=/Mav�<3��u�@�_�6��k����R�PF�����knZ*A�j������CKUk����Ͱl�TG�oa?Y��9T�x �0@$!�&|��ٚO-SusJO,f�g|Z��/c8�H�/�r4f�#d���������(B� F��QI���+y3l�$�X���PvlY�f�>����;�!}��_��f��]��.#�����@f ����`�y?H6oS$��1�(cY��}��E+�n��D]L���Y]��d;��{�{d�n*}�|Ocb��!�OS�)��hd,�u����D��x��0)GW1lӽ"�J&�_Ƙ!*���}UW1���.�6Oh&��.���,kЎ�3M�P�~�p�p��A�+��J�p\쉍K�UH߬(�H!B�p������S����\1,ΐJJ�"j1���$�I��Z��紵%m$�!O�+Z�����J�C��m��ʆ�D:��v��lq�����f��h�4�$�\/4�'���È���x]Lʐ�
�X!�@O��st0�U���\t����3��S���魫ʭ��Փ&��;/A#j]��P�a,�Fj1�7c��$����\b�TC���7"�i�7h��ɧ����̒��(!wF�DQ�BXANTUi%k7=Y� ��,i:�W.����v�I�:�(tUV�o����0 ��n�|�L*��F�]U��ů2&�%lx�I�q�+\ʽ�Z+&�� 2}MKř�f#�)z��w%~vW�0��P�7̤B%P
�*=�i�@ϴM�.`M��鴛v.TJ�P^dN�a��\�[�������͛BWw�����$���É��:$7��sY�G���%��eˮ��T�_�:n(���jf+'�@N�$b�����'�z�ƽiaѣ��ԛ�&3ԃ�AL��v�'���8�����5LjZWs�]ѽs�����K�/�+NH��ܾ=�K7G��� T�#���W�آ�VU��"�Gs-
�&��j"�S�:b��ڙ��i^�
FE�\Aw���[���8��8���TQ�G����f"�)HvT�R���d�.WRZqr&�i�Y�q �x[C�7FX�M?Q�&o��[�A����3QgW���+�d���@ro�_�x$է�{&~z��ϭ�N[\{�50�5o�D�t���-+��d�*��6�ܴ��]��P�8�����n��Y^uqO����&�d�=]��}�w�l0�W���Co_�)�U<�k�+����m� �#�'�ׇ)�Þ�P�tOjj���Y�/��_AR;(
C
e7�i5�AI)����gr���Օ��O�
��e�(�e�'�ۉ��R���.EJ�]^����]��i�K~���
��\�g\�� �s�R%��~r��e
��P��w�-�ܷ���T�<��V�p�}�.��U�c
tf��d����$u��CC�]fF���.5��Us+&ބ��9p<S�z��.,�i٧��Q���c߇�5�]��Ob��Lq�y��X$�f�F�,���~�����PB��0 jV�������ŋ�����=��s���
r�p�ߠ����W���P ��@+��&;�~:
+Tvn �Z%nЖ�%�Z�%�AL������9�{xqz��ή���VB�|u��戶�����?��������;���@���r�""�ڊmu��z��ڬd�����j�nq��q�gEaa��i_u��-��,�x!*�$~��*v���c}��ij�F��� ��?
endstream
endobj
9 0 obj
<>
endobj
10 0 obj
<>stream
H�j`d �
``Q`hPbp
(0@�@� M�^
endstream
endobj
11 0 obj
<>stream
H��W]l��wfwm�ml�G���چzg�c�Č���b�@3�mh
�'$qBi�m�yi�O�*�YZ�J��PEUU��R��ҷ��Լ��~gf�1'��]������{�wι��^��ڼX-n S9�e�H>��_��������j�B�6������~!Ěw�X�{a>���{�5!��i����������Z���[/�C���ٛB��/��W���?�^ /��s�[�G�B��k��M���HJ�X��W1�G��\�=��*�IJ�-!��5]�O92%�h��0?y�G������cm��^�#_�Y�|�<
�����x��ny�o|��3�?��Q�o�A��&a+ǽ����}�zt�2�Ny�ݢ�~��*�=2��4�FQ.�sV6K�'��*�w'O�&,�ɰuVg�d�j���%�:\Ndt�N4`�d�����5��Q��X5�Pv�7˭��.��*@�T;d�4��7�W��/����Q���z���5���}T����f���E��,���id��)*��46�2�}%lX�M���a�f�G%O��PQS�Т�������
|߷�-jv�$f<�LnMS����~�(3�~Z�����'����_�A�v�<�mR!45�%��C��A0$�S&v7<�梆s��N�k%��̠X��P����
֊����Q/(Y��i?�+?��b��L�S�M��\UI�QՎF�h'$���2���<��[�
Y)qN�4�L &bk���V����R⬶O��d�� .��X�!5v��8 �,Y����D�D�
é������Z�Xн�fa������x�F��<�٠*Ek�< "Dm\�A�-�W;&j�����2V�v7P�@Q;ܖ�������&�~j�y갧�z�ǒF+�����ӎ�Z���]�j��Cj9�~��A��0J^��^��cٶ��ư:��~���->�L��)�>�B ѡ�/�Ğ��2�V�-"ag=Z�U�V�_�F�9*���uR���8{�}2�:s�^��wuCcW.O=v$�\s���\�������쵣���Q�}v���;Z�e��u�S&���*�|�wK��e��K�W��������kI���ɭ��~�He���e�O����ǥ�>.������r���}\n�>.�}\A�[�� ��Ʋ��COnRl���M[r���9l�)�B4u8��`�J����C�f��q��P��]E�"�ܶ�=+q��jGl�̖p�_^����p���e|�M�ѣ�v��ZG�x���,�h�vڅ��<�~�]}B$�TAM� ��T���N]�H�Rvu�������R8NbZ�,Z���+��Xs�~��
�|��N��(�3e��w/��ʺ�L��>i�ph�x��(�>�]>�[)�s�ҸTѝrC8����1!L���'c�&��jr�U0�S�ə��!�`��p�/͊و6�ijv�~�a�����`�znza�����I=ŋr�,���$�&1��.t��֨خZ((3�����.I����hiN��Y����Γ��!��Q`/NR��,ܩj�/Dò�v�c�3V�^�c�j�k��Zp¦ݹ
l����h��1�K��L<��I�s�jl�v^2��5ᮩ�Sz���Ŭ�ϱ1��jY�d���S8���ս��ݹ�����f��Ⴎd��m;��@#��Vh��t���v�i�C��"ܭ&q�ֽu�愦C�G�� % ��]�q�@�r�9S ��ap�9N0��I��B��1��{2i;���f�d�M���%����bt��,�&���dp��dPf�>�9�0�g��08�5t!��ѷb�����۱]�.�v1�����.F��㱥 ^�k4x5�{���㚃ګ�kk�� d�k1G�8�c�K����� �o&����<5�[ d2�mp�,�w3����&�����ad��N�p;�L��K�}?���$��?L �����G dB%�LxϮ���l)cUS�Yğ&�����y2�K���:��>�k�ΰ)���&��4�y^R'�aȳ) yD��L:�ٞ���ud́lG���ɍ����?7>���.� B-v��矍�1���Qa�i�H�错N��)Cd��C"�a��a)z�gz�״475�>���؝Koܱ}��mkw��[����m[Gvn�d�1�m�W�?�x�<k���{y�p1����즟7Oޜ=�7�Wϻ�v����rp��U�� �w�v^T7���q�{Օ�g�a���77�ӘM7�T��ՙ��MXzd��A�7��ן~���o�s�#�e�Z�ڸ��wW�Ii ��8�!`��I$��� c���S��3q�Gj����i&�Ď�Կ��d�f��cܩ�q�NO3��i'��iS�G۵��k{�J�`�����9�;���z��#��=pa�'��̶�X?���8:���1.�}�ƚ��fB �C���.����Z;#V��u /��W�������B�W�{�x8t(�8���������j�0��X��ȫ��|Ȫ�����bY%����bQs�V���ѾȞWFS߉G���ٛj mG���C����c���ϧ2'�E���C}=O�Z'����N�M�E� _��p���"�?W>�/Geʑ�ȏ(5F�I��&QD&5Z�&mGʾ�V@���ks5f��j��Z����rZ+$siI�^�-�r-j���,���yѧ�-9_Hj����V��������?F�Ps�Hi��o*��P�:P\�~ES�E{0��Y_��
� &SSK�?�M�����+V�Z��I�ɻ��A
�x��<0v��b�X�ρZ�07%�N�x�qd{�Y2�+��CrF����]�]���aY^5�
dy����z��fpö�Ju���5ɁL;���D�rㄹ�x��"����vX�$��Rr���%=��L�����:$��}k������E�e�䡄���О���ur�~������@��5�T�����Z��+�Ɗ�V������� �:�%��I1�j�'�i����Z�%c�V ��2�'���#���Q��rÎ�c��LC��K�������紫g�~g����4����X�L�L���˟XO~��#�4�a�F �� ��qtR�V���ȅ�
�C`!�'I�ɤ�V�G�B�f����,������ѐ�i6T/[&��'gbc��^WM2�MaD�S����������J��R�G�J>ts@N%�E
��͜�`2��J���d��P�)c}/�NC`ێ�G���Ʊ��|ʇG}�1u`dQ�/����r]X-
��� �z����1������~[��TZ�ޟ��):t3�;M�gK!_])j�
�s��$-�-G�W�>{���ŕg�ܱ����P��Bb�gp��@�{�=>�qH�vm1���uM������`�궥G�_�Z�ٙp���u�b��Y�n�}߯-��z��T(����ó��4����2�5m�t��S߶�3����ম�ε���3����G|9���� }��m�-3�trl����9��32�s~#��ɤ�V�Q�D��(ro�6Z�ڊ��tc0]ս2{��`�=ty�?�ðNt|��Ԡ�)��I��'ؤ:��8���7�g��,��ot�EZ���,GVERFVEeY���8�M C��x���fꩉ�W[�ۿ?��Q�wYz1�윏\�\��W�PAW�:���ͼf� -�Wyޑ���٢� ~�0
�섂̈́Pܬ��`�x��r�dSՔɟT��WuE���tE����6��.��
�Q O؉�2�k!^��Y �l��:��%I&?ٞ�4��'��&�:�Q��):řL�%�r�qø���: I��V~=t����S�����|A�"܂V��8 ]K�)x��L=:�|F�a�CaQ�,�lK���6���P�ۡ�� >D�@��kK����2�_Ё�M�c\����=�����F����®d��U�\?W>#c`�{ q�0���`
��($���_X��q������ǰ�s
e�p
�d7���lX2���k�A��\�u���@~�ܜ�7K ���Q�/@���"��O!N��A�XI����PN�� �'�]�M�a�n �ct 9ļ! �B�����(��=
"�F����ӛ�Ex�O��i+�wЏ���<����%h'��q�>�7B59�\!�a?��\#�@&���30A�(�� �$���>t�4�9��!\�(w��ʯ�$b/D�H��!�$��&r =C�}�8a^C����'�0
�a~��x>�sP���l�\ I(��,�(���cP���:��4R^�������o��������i"����.KA!�4�Cc4�"��ֲ��(El���-?I�I�n^��@�͓ы:�����n���Z���s��;\2�9\�}��:�]��/C��p���N�4���xl�6�a
Ә��1ʼ�����hb�jA��Zi2���&���3h�^ ������R�y����3B�_��O����}#b�'PG�9�Ls�5�J
�|ǝ�L����� ��Z\Tc�Vߙ��Hk��s����S�'U�D�B�u�uA�j=�r-Q�gmi�${��q�r�IZR;���v]�x��
�Yk.��4���U|6���j�]+�k��Q%�VX/�06�
.�=�+�(���7����^�g¶7�^[_u!o�9���MO\7�
��X��~r�d��Z�oUX7�������$n�4
0 U6
endstream
endobj
12 0 obj
<>
endobj
13 0 obj
<>
endobj
14 0 obj
[13 0 R]
endobj
15 0 obj
<>stream
H�\��n�0��y
_��RB�J\젱= ML�4B�o�`�NZ�D�b��c�?W�Jw��v�5N�vZY���7�w�BP��V�S���|\��}����s�?�q����n���7��v���s��~�=� (l��Kc^����]����杋����
BHp1rP8�F�m��\�U@~u��P��Ppح�ߍu���q�vV�@I7ў(�3EL Ӂ(
��QF
�%Lq@��L����X3�2�f��b��`J�8�Z���T2��qN�����oL8߅kI8_ɚ �+�}IJ�[����M�3�k�x�K�\��t���\Բ�_ Tդ�
endstream
endobj
16 0 obj
<>stream
H�\��n�0��<�����Ͼ7H(R��R�V�� �d�&���ۏ�j�A"|����v���lҏ0�?�S?t�_�[h�9�s?$ya���?ϖ���LI'���_��iL�ڤ?�����aӍG�����������h��m����f����t��~4�[s�&]�=��x���Oq���_�ɛb9�)ӎ��NM�C3�}Rgq[��5n����݊ӎ��O����,���B���W�
�%o�;��B~���@]2�D~Y�pI.��/�_nțȖc,�XK�`Gv`:[8[%+��������������������������������������������Cdvt�r�9�a� G�K�K�)K&ߋ�;
:
��� �
�
�
�
�
�
�
�
�
�*�*�*��J���_��W�+��J���_��W�+��J��{Y��+!�^�yQ,���O<�D�~�[q�,�uY3X-��V�4N&��` ����
endstream
endobj
17 0 obj
<>
endobj
18 0 obj
<>stream
[Embedded logo asset metadata: "DAT_Final_Logo_None" — a PostScript graphic created in Adobe Illustrator 24.0 (Macintosh), 9.899624 × 1.569032 inches, set in the Barlow-Bold (OpenType) typeface; saved 2020-03-09 by Vada Ortiz.]
�O�6�����
߮r�������(��_�c�4��%�G���F�m�Z�$
��';/$�4M�_�ƛ�>��W&��TH2�X�l�m��T�H���X�;�����:p�*}�K�R���o���+i�¤��n�C#p5ep]wS�Z�6�~ځ6�!si3z�Nh�����lu�6{QڅZ��E���=v���E�(��҃h9��R��qg܉��Q�+sk��ed�*sR�9�����O5� ��~�b�^���au�m��:�O�P�]�Y��e7f�5��J���e��"�=|��q����9��w�N�3z�S��8�|�"�[J����L�Q�#%��Z�RJ��M<�='��0I(|g���|F�_Î��y�{�S�w����<��H����m��5�#ذ�#����ﴓgkM\J)�`�'N�G�9�Oivh���� �
endstream
endobj
22 0 obj
<>stream
H��Ux����fb�/�;;I���#o�N� QX����M�JH��AE�GDEE��*��RJۻKEl�RK[kik�#��nik��4=�K0|�|�~3�ǽ���s ��"8�e���O��`ɻ������ێwLh*��]�#��u^ ������+:[k7� @N֊�ڏ�]�E�@�H��G��ؾ����7����;:{z�흱���|Ʀ�]���O�]T��s:#��j~n>P#�^��t����d��|evw�ڞ�g0�����kںO<�Pc~��P�)�ƶ}J�%M)J�0��1�����h���m�@�E���2@u�c��N��c��6�Yp8�2]�z�WLj��$�N��"�.��h-�6���G�cťX�k�{�C�P��*��{��������"1J��GT�Z1Y4��[���BͣUj5����z
<ŞQ���� {�*��Q����d�LrS��$���QG��Y�������Bu���X���}��/�G�P��2!�~']�o�~c�i:{�%>�������t�0Y�㫇֧��ߩ��'�����������ڿ�����?���g�w�wR�~�Q)}�d(J�R�4(�J�}��S��Vv){���?)ۯV^Q���rT9�Y�(���)JאF3�9�����XZH�i<��e��ZJ�h �}5G���Rx�c,��[��������k1s�Kс�
]XG�T���=�
{� ��v�����[�^�w�#���8�����o����@㨕f��k�B��|��z�ZG�FQ������Ka*��J+i-Up�QM�Q4���N(p ��y.A>�(�e��e��@�������A+�!��q~��qn�flĝ��c��m���,���/������2Yx�p�
�Ɵ�*��t���u+2xL�~��LNj����A4�0�pA��f�<�|���� ��X��X��a�{X��ю~D��l��� ��j�k�V�z�O܌�|S܍��&��納�y.�����ӝ�����~��6�t/�E[Љp#~���=4<���+w��h
M�{eM��t+m�J�����|�}�S���i�6c~3���e�kʛ�d�\9:�!��~8X>d��0�K$!� ��h:��yM�ٲ($'�d�n}BR�������F�V��IX�ސ��0�~�$C�p�O*���O����S���7e�)�a\)2��
Ք�9�W�,�f$&�ބ�(|����4[��)&���OR!�t�`��J���t�0���f��'KL�KĄ<���E�*�6рtB�T+���!6v���YT��r���hY"��截Xt�����ږ��!�h�E����,�.�^�٫��+lY��ђYfT�9$�dk̻���^��"��-��ղbK�ײ�e`���-�L3GਈpN�f0$�u����\��IgnFB���~a+�t]����TÁ�L��Xi�>�Ǿ�i�мP8�4[!��,!�[B�sٸ��'�
9��&��H�����]tD*�풢�L��������h� �>l�&�d��FbD̀�Z;�8�6RV��r&��>=b5 6\vA�pq�CQri�HC�E�E��r�ק�
ߔc$ڗ�5�^\�fUs�qE �X��'�6B暳�x��y6��\^�^�|P^�Dٳ�7â/,d>��F��P�k��ev��듅FӼPSKJ��X^��q�B�SR�/���qk���'�?�J�jE0���|�}\av�W��mh�J��-<ɶ��L9�F�^X���0ꌗ)1=AD�j|�*��!Y��E@�p�e��r~~����o�~��F��u�ex�^���*���>y�'��2�6i�U��2����x�Mˌ�Ӧ��x�M�F<æc���z
}�3Һ���Ԟ�4�)K�+W���a����5)�0 s�͓�z!������4�Op\�Ϧ��Y�����3|�}�C0`C0fl.�ߘ�y���� B �$3�I �b�44 If��Q�"�M��j�n.��.��TjG�M�*u�E7�F]Uj���&�����~��������q�]k]�����G� �k-�k�k=�kX�v۠a��Y/��M)E��f5�!�7���
��7dS�t�4��e�)��\�7��8��~S�)E�2�C�7qZ4o��y+������D���=g�ό�.٫vl�p'��=����Y2��v-\�}H��OT�Y�;�"��{��A �vhccP���������sg)���r�e�I�~��igq=?:�V���
،�J�M{���sl�>K3%vl�䕽�gR@���Ӥ�ah��
55�+���:K��<����9U�q�ⵔȸ�gi��^��k��j
9V�C�N,[�����D5g��!�d�(8�5��HN�� ��I�r/BwN�5��@V�2�l��m���:H�R{�%�`L�u61�v�@'���W6������o3�{U{6[*�|�O�t����s)�a~�IŔ^��s�L�vO�7y)�������W���\�ߊ��w���0����|�CPo�"�a�+���>��|���Щ@U�O�g�Oi�6�5�%�eI����b�9֭bT���t��A�PN�ø��|jV�l4�A����A����6t��7<�9^Z���hz'.iRq r{S8xsjը��4ਆ�)�1 N����O��'lj30A��!p�8Nj�0@S@�@��3n>;d>;MD>E���L�\8���H@|�1�"K��Y)E>����%���>���?��/���X,��Пâ�٘��q��&��$\��ʴ,º�e,�
���p(փ��p5賔D�2ξ��G[_w����~�+\�����~&���Y4����u&K�,�f��3���4�$�QP�*;���rWiQ�=?����m��Bru ����iv����Z�8K]��������.�~\�I�d*���IM|R��~�'��?=s�jg�ׯ��i[��L
�!bs�`�N�#М~f�Β�|p�|����.g���Al���'���k��ӟ�]�85�isS��'-,<:�ܹ82�ԕ��K��e�֝cS�Wq&�7�\���Ȥ *���3�(��.i,V����ôX<&�j*9Cf<���z�7o?A�V�C
B�ճ?�|������E�ff2�Mj����z�����W/4���rg~bv�Ι�6��*��PWfo�*MYQ��"� Y�Bv8$k9i��9��
�s�Z�k�,�[Ed�("g����D�( 9Cvf/v�+�TIIġ:DUt$��8���|+�Q��O[w��-O6�}�N���6D>O��5 ��t�(�->��˿�@-P �\�XnkY>q-�&ք���te�[��G#-Q�#���!��Q���e��cWlt�ɲ�A����v��[wE�3ςָ���`��2w����<3{ʞ��J y$����D4��:�|l��,�{��y{S}�6��r��c��zv�)�Tj�rTN
�N75���w�P�ZWYT�8���N'E��f-Ƅ�Ū��g��ʴ�E�#R&̡ �Pp�~���Ͽ~)L�Ͽ�%To�V�OV��fTp�PTpY���C�|@�/n�\���O�qEay�
�7S��'���w���j�'P8X�,):P�oUX%��A�2�"Jj�bQ{(�4� H-a4�_b�KK��B�.�-��\���Kvר���z)�i��d�� {��_�6Z�l�v�����V��㥩�
N_�bV���ڬ\F�2Iq�n0usj�^��t�F�i���JN߮�H�'KR���������*��8�y��,��EE�UDu4����wLT^[/�K-�b˙������Q{} )|��{_srux��ĕ�������Ǒ�&DD������mҷ��$+#3��آ��G���Fm����##��'k���j�s�9y4�K�|.��捸��YoM9��5�E��G��v��R��Р|��^j�7�NA��k��5л��M����E㠯�
�M�g#A�
�c�Xa��$�� H�0����)���J7fL��W�+s����bױ@5a�L�� x��J���ѡ+}�o=��Z���
�g�
�m����H�F"v9�_^�W+��qk[Sfm�_��Y�������c�������?;���#n0I�ĉc���$N��6.��Gy&����@B�Zmm�ģ-]7��i�&��jR��@7:�"(m�*4
�c]7�v�]�J%&�w��I�����棟﹏s�����p{ڥY6���oV����)��ZꕡY�%b�����'��:V��ct彣I��gΛI���)9�+�� ���� :b:�}69}k4
�xJ-^TW[QV����y���L�fn#4]T���eǃJ��_� ��o��tIV)Y��],"�ŕW��ڶ���u�7B�c{C�Ơ�uv����%n�������U.r7x
s�_�����eڪ�ږU�X��N�C�5��V���W�n�W`�WEE�|�K#��"
�`Pnؒ�ʂ^��S5���LYvBh����R8�U��$34��>й�I�v��UO����3S0����\��[�f�t��FD�J���y��J�1���M�2����(C��ŗ���$�}�����6�{�N�����=���3&N��:��SdI���s1�ע�&�d ���q�̀_g)t�B�#߽r��8D���oݹ��IobN���Y�\|"�TrI�'�����I��.,��T��G��b�"�ɛԮ#���d�G��I��'�~����g�:P� [��z��U�R���ÏC� �0��hT� j�e82��,]N��}[���e���+�W���������PK�PX��_�ԴJ@�{{H�F�O�ì�I�>�هCQq������wD�&t�Rj�e�ք2l؏l��L�MK]�p�TYd^��#qű��u��]�F�Xҥ�l�dlE�26G�h��n��h��]v<��0Orr�.����*O�uY���p�"��\��9Á��-��nj+]����j��l*m��x
��v�ΚW�
o\�20ߖ5�hʩ�)o��98�W�^JMЇ4J:�������WF[E���ٜ�j2R�J�`a��aa5�5���u��췱����ק�;|8�xIs��9t�k.�n��L<�l6����Q�N�
�� d���7�$y��S��x��iX�S�j�����1�^|�j)v
7DҰ0/�A�+騞�f5�
� ٓ��o>�̷��_��.{w��l��j1k>�Hב�*�*�Q?�٪ƞ��^�.~��@��d<�#=2������sK��۔.}���O=��W~��8����.�"v�V!%}8���:z�֜R�I����l��7�>+�O��+*����ʒ�6گ��4 ��F�i8I�9���[�}/����Qڌ���i�o.�~���$Q��D�
��_B)�AV^O�Ӱ�B�H��Y!����}�"3����؟x�V718�ȇ�Ӵ��Q:���3`3�p?4S�(a?U��UT1�����8Oav�r�5��eJ��Ud����x�(w3H�U���`3�fY�������y��X�B��uT��=��R�8�g�Q�z�iJ�� =����m%ۈ��۱;�&�a�n�Ce4F%t����0=�_DY{x:� �=ư��M�v։� � �����9ٯ)
,a;!��j!W�7ZˏR�ԍ�$2�1Jc#�%�q �z�z�w��X�
��X�
@̿�B��Er/�������A)t ۖ�p���;X3G3D��n*��lb�����)�si���K�{���:�KK�br*��_����}ZbW�c��v�!��S������l�"i'�h�c�
�m�&I�wx>��̴�݅�ZD�x��T��k��ߑ��?��D��x%�ߨ��)�;�o���=N�_2����ҏ&�������'�K_J��7$���!�~f�Z�����i�l�]��ڗg��AM$�j��C#*�
h�}������ zK��������:HGkH�,�K��,l�D� ���
Sh �JdVsD�4��q^�B/Ӥ�[?��\�!�sx��q^B{_�׀�V������uÊƎ�H�ڮ=�e����2ꧽ���K;h'�''���1D�hk�=ԍ1](��{i;��R���D��E��A������6�������I0r?v��F�� 0@��;��eo��m����R��WYg�r��~�zp+�ۏ�>e�f�2H�Ŭ��,����K����߇=�=�����30'���Iuޱ��3�V��Kk:F�ft���*ڶw�t�W�(K"�`O{5��f�dPk�@��J�>k8'�Ӝ�2F�S�gqq�D�u�xd�Ŏ�� ����h���ѓ�@��c�-�~��{ݶn�C�N/��e+�ҼgX�Ј��QN��i�e�D�)� ⸝�
endstream
endobj
23 0 obj
<>stream
H�\PMo� ��+|l4in��iR�в� N�� "�?C�N�%���x�_�����#z�c��:q�k�Nֱs��t�ʭg'r�- ���-�Oj.)npx2~�#���`�n����?��~pF�@��`p$�W�Ԍ����۴���7��������4F�&d���оPH����7;k�ME���
A�p��:�ˎ/7;ndV�D�(e��N~���O�5F�_vV�g���c�� V>�W� %#x�
endstream
endobj
24 0 obj
<>
endobj
25 0 obj
<>stream
H�:�&�`ؠ"��)����� �J��Q`�PbHh``0|����� �� ���
endstream
endobj
26 0 obj
<>stream
H�\��j�0��~
�Cq�օB�����e{��V���'=��'E�������$]V��w��8�'h;�"��=Z�+�:��\g��n�۾ J���� �ʷ��s���8���+��~�c�o��*�5�����'H�(�aK�4���"�T���4oH�w�sٲO�1vp8��bl�
U��* ��*z�/��Evm�wU���$���]�0��y'�c6� |`.�K����,|f���r#��}M&�1��a_#��}�^x�,���G%\*M�}�����ؖ�q�:��Ɇ! ��S� �Ԓ�
endstream
endobj
27 0 obj
<>
endobj
28 0 obj
<>
endobj
29 0 obj
<>stream
H��Wkp�>��K�
�Z�e,[�� �F��-�6��C~��_1���$8� !-q(�� 4�Lۘ�ҡ3����������5Lۙ��
mlJ�Nf��I��$S����`���ѱW����;���9w �`8���*�e}��_�]���#��t�ك�Ѐ, ��������
��ߏOOD��ޟ�| ����S㧬?<D����&�"�K�N��2�V1I����|�C����oͤ�����S{G"���ܤ{ ��h��4gM�P@��D��}@� ���{��3~�B0���76}�=*��]}4X�f'A�n"��p��O� �O�8Y�~Y�ת�ڇ�/aN�[��Y���m$D ���k�ό�� ��b;@��H�,��5�~�ߦy5���9�f�A��Kc�3@/_f��˸�;�bE���j�Gh�<'�G��0��6�y���j�#����H[��l���NɒY(�=�@yE�ߗ��<�-�32����~Q�
;���}z�Ա�I������[ےv�=�20�'zn��Hu�W�!�oWG��6����%&���9Y��l�1�*X�ٞ!j^"��S
Z����_ܼip�Ϸwp����bm�G����ͼ�x��'J�"��xFk��������+�J���dqe_�|5>�[�W1�b�w���#tx?��^�p�2��s�x����T�&��#�����l3+hn^|��{)�Մ(@��c&H��2��(Y�-Y��_o7��R�
ӽ�%ES�D/L�p�(��v4g6C6Lije�HI���C���FYɜ�ɾ��g���O�X�;ضXϲ���ſC1(�`�|]�p�� �sw��`%\���G�RC��k�z�?�To����~��C��+Sɗ�z�=#
4_���4�=�);ɉ��*��D���g��4$�8R[{d�y욘��O8%�ǎ3'N�1�)��Ӛ�jv(�ב^�8�0J�t����ݖ�ʐͣ���Kܚ&�����n�xY�x�O�45��f��p����Hk(&��ėG|��~�̊��U2+}a�U�[�Y��i4:Uu��)��+�4n%���v�om~q��s3���]ϰX������o0/~���B���:��M�'z/���͚&z��"�n�W����vح��b,^�^f�I��X�|�7d:\̐�-�j�Îֆ��m��}��F6��B%[�p�����dJ��P�Ro`}f�h���dC_UNE~�D+�ffڃU}~ʿ�LOtIUw1���4F�q\.gX�l��եZC��16/�2*�I5�!T]w�|��D��J���>�x����¯���Do�#��D�tE"��X���nਲ D���|y�6��[������'f%zt����Шj�#G; ��-��C�|.o�;���l�Dj�Y�]�a�N��I��e�徭�]���1}b_��t��zN}�qs�:O�H������U�J|5���(�v�Ջ��Nھ�'�pTl���Qz��,R$i�,�e��y9�5���:a�,�~�0��/�d�p��,*4D�����Y�V�)P��6l�YeDo�Y�U�W=��ߵ5��Z[\�V��j˫z}e�6�w���=M�v��hgAZ�$z�饫hO�2F��`2�qYՔd�$S�HW5���ή���e�l��<::���s����/<�w�2�$��v��W�In�����K���3k�3jL�!�.�X̄�y� �ԋ����-��w����!'�d��˩�����q�J�h;���:'5M��k�����Xّ���kGGy�D�*�R1X]�5������sm�Z㩎6�?Yo<�z�_�w�
3��
M+�)|�n�~P0g"�"�"�~T��+c�'��ܯy��'� 0�z�)������Ѯ�:T)=eE ����Hn��(~ ��E�t?� ��s.�K'ů��������P�P���V����<�9�8��z�#���z��/b&��D���,�ձa��@c/���5���������3^g���]c?�}�/v徳oGlFkы��'���=�s�Gk����\8����n��U�U�w��@B� C�N_&�̛I(KS��dJc�@}�J}���-M�@�v��T+���q߭�k�+Z��z��+�k��O���o&4��9:sf��|����}���5|��1p���_�UX�nl�˱Wb.v��ڃ0� �)��S�i�Gpw�n7�xމw�#�(��'�>�/�x��w��/�+��_EF� � Z}@�Af�}�!��Ip�+ĄxL���I< ��U<��%"�b�X$L!VU��J�x간n�YI,C
&��|����ɊW`�p
<p#���7�܁��v���Q�Cx�{�>�o�3�<>���Y|N8�"~��G�1����ϼ$oeT���m���A=��Ž��߈f<�j�1,ă� ����"��x;�xC%�,LJ}D;��h���w3"?���8���(��$���ᓸG�O�r|6>���_B_c��ul�ql�7y~��-\��a;��]8��������+���q~���'���7��q ��!���^��33"O���[ģ�6q��S�+�w�[�#�w����õ�5��o��؇�0�e?Wvи{�<{��.c�n����AK5%S��2�&�G�R���u�Nm��#3�*���JT\��E;G�F+#|��p�����Fc�W'��O��8��X-�h֮u�52w�|�4�,(}�)��}]U���C'�����s��O�����S��g���)�Ra��u��;��-}*6��Q+�j����m�
%�����ac(�H(�
��3I��=+�����3�B��0i6�ȑpc,[5���,?�h[~2l�����T�6���N�B!n���zt��IX�%��5� 8g(�Qw�Y��#�
�T����T�m��rD�cyi�>�TT��ᬊe��
'݁�2Nj�T�<�6�[u�V��J��M��r�ܓ�S�wj�cyG�bA����qD�*�Z�[����n�h�y���Ч������Z�W8Z�a�q4�!�)�J�\���+G��a�i5%%�$ԩ��;�°�â��K�X 7��#~Ő%��V7^_����V��'mY�E��wF�Dh�����1܄+ՆA�sq�KY���0U���d�
P�îat�*���N%�)��hO�JSjik�VCR�6x�f�zi�����Y�=q�q�3i^i��6U�d�h�Q��Q2N!���i�BO��Y��V�B�զ/�1��T�C8�S�F�m��7�P(�F
=iUk�UJ5߾Bo�-�juo����^uܨ6 E�a���lO=��[Z՛}[?2�㶪�Q�`Z5�}[����`<��`���Qoos��z[����R:��Z�?_���O�f�"��;����ZEZ��ֶ'.�j�K�z #Y���$G�s�i�YL�
��O� X����nuT�aɬ���Ut9KzO��
�7
�,K#��9Q����������:6�Ҫ�����M?��"ӏh7����M?���_��ӟ���L�RӔiL�b�6dF�:Z�ʜ6�|f��dz�dۙ��Ҥ4��fՓJ=URU�9]�����BꧩA�4m�~�&���m�OӋ���K���˨����Oӌ)���0yl�'��g&ef��v��#�:��
99�5�B���y9�Z���L��IJ����v?*�����r�4xf�Yi�U�䫸[�'��3��E������Yot�+E��u5�疟�R�J�K�LKwZu�'V:�0�/��М���)��n*sF�9���Ǭ��K��F"����Y��-�t���y��\;5Z�Rv��ڙl2S�O�k�[*O�
[�#��#���"�ҙ��I�V����g����]�V��ވ���T9�q�=���^S�h��F/ml��^}cU��)�����c�1�t��v�Z��"��r&}�,:B���Ѷ2F7aZwfJU�FN�����Z��
[���慮�/J-W�*�do��ڥd�sy{�Z�v�˦IbO�����*O�x�GF�ثZl'�*��)�g���3f�s�=�
�TkR�;��TkSEʦ}�J��J�fT'Wd�����/�@�J�k5>F^i�^ӯ�]3��t���ʋ�N:�uLU��%���1�IM�r9{kS ��KY�3l"M���7��U��9g���H+��6�^��i1h@�@{v[L��Z�v˶�E�jUT� e�w#X�&�7c�ݚPMM1��f>���|2���3�uٴ��ͷ�Ϝ�̙3W�F0�sj���EқQ���#7Bow�&;؊
hn#�dk�8��r�N'!D#D+�E��IX*�
B��gD;=��C�A�g���0��T�3e�g�t�^�,�}�)��>���O�.>�"�gRt��L�(=OC��C�K�Az(�W�
AR��:��EuDՋꨪ��^T�T��^T��:�6����}*&� _Ҳ�]�\�Nb�5�SZ���X��
2o�/�US9��d�״��u�cohI�i-ix�-�彥b������ђ�w���Ւ�����}x�ΗwVŔ�-i?�%��i絤!�%
:��jg+3| ��i�� �`�[%3{����?�XW��?���\��_z����=��|lFDe)��hey�� Ǯ��Uy5�F��UV�c�KH$�%(�9eυe���V'�Bn+�]�� ���}e��3����b0��%��>�JyA�X���H�Z��ᏹ�1?�L()S
��U�3"G�Z��hs²x x�`eT�A��,�/��V��_��~6���<4M>( >P�0}��b�
�~��r��������Dݠ$
K�*�L��������P�ʮ�Wd{�b�oC�
gƻ���Ǟ�\C4�7}!�珹�B�a�ԞP%�\�0�A��
.�/ ӗ����^P��:]"]�Ͱ4�0I.X6-@�]��!Sn����Q� kAc�a1|)��#�x�ؽDVZR�3�z�Xi�{��[��s{��fp�м������DŽk���hOc��˶hMc�mN8�V, �]�"L�~��1�8=��{E�N���;=������}a�����Kp ��#i� .���F�mji��`J�j�L��|���8�/���znaxݤ�u�~2u����GMxä�Z�=~ۼC���O��k�/v�w�͓�R�v�M�oڸȔ�y5h��~��u���Y~� ;l�q��c�a>&�K��j���`%�I�&}�7���ope\�c�I�t��p�D8�U��Tl�J�&�X�*�~�1r]��Zx�O]�?���v��<-~nc��@���b����X��+oA�*�'K�j�Ҍն�e�3Y���Y+��q[��������� N��E
endstream
endobj
30 0 obj
<>stream
H�\�ˊ�@��>E-����@��b.Lz�h%#LT����OY_�
0�չ��<�թ>��*�ԝ�*���/�1},�s� �E?t�'����9m���X��4^�`��/k|��S���t1�A�c��2�7��:����1�����(Do�6ѷv��ލ]�۩��a}�٘/���lD�8��n��cn;����{i�������g�4a�k��]�{dݥL��R$cGJ�g�I�uRq��I"���;(�J(�*(+\ݚ�1�@xFOׁ�� M!��������S�hI�h�g�!4��K|\�rz�kl6*$��s�XF�L�,�3G��LS�� n�6�<!��u�@�eL"Cm�<��9}��]�]cS�d�E�\AT�Q���Ӌ����ȹ�k�nC��F�����Yب� � 0 ̶
�
endstream
endobj
31 0 obj
<>
endobj
32 0 obj
<>
endobj
33 0 obj
<>
endobj
34 0 obj
[32 0 R]
endobj
35 0 obj
<>
endobj
36 0 obj
<>
endobj
37 0 obj
<>/Font<>/ProcSet[/PDF/Text]>>/Subtype/Form>>stream
H�DO�j�0��+t�v��n(��5c�4[#=C�a,���OZh��{���$]:�F�y)h=�g�:��|�E�a1���4��:�N�^����O�Y�1��\u{P�E�ڧnN��-���gȪqz�Oi��*k1�O���כ�aC`tX���r���N=�L���K���~$q�7f��[��m�i��������z��#>�����U���{z�H��+� ��EY
endstream
endobj
38 0 obj
<>
endobj
39 0 obj
<>
endobj
40 0 obj
<>stream
H�\�ώ� ��<���Qi����m�'���8vMV$h��2L�M��f��dH��P�~�ɻ�f����0�7o�_��[� ��f�����cIH>/�Cm��O>��4��?=��V,y�-��^��Wu^��|s��3OyY�� �Ҹ�f �Ĵu݆�~^�!�/�sq�E�gdƌ-L�1�{V�a��8�Q2���s�Sڥ3ߍg���4
+�Y��o���[dM��+�
�@|,IS��$�:�t$�(��������+�W1^Kd�Ѓ��s�yO�G&�
}�#��D.���A�M4z��A���2��X+�i�rA��˿�2�!��豹y��T�+v���xunt>
endobj
42 0 obj
<>
endobj
43 0 obj
<>stream
h��Y]o��+|L�e�!@�����#XΓ���nOgd�5�+���S�đ i��@�@�4=;,V7���!'TcM�֙��w1��Mj��I�"�T��s�f�A)��b\���j|r�Z�D,��6��(C6�S�bR0��j�%[R_�%eJ*N��TJ�$$ؚL� ��dG�����jr���M�&'��LΔ���hW&_��B�H�
�"�*䋴�АhW!_�R5d�UȗiW!_���+��|��T�U_L�����6/Zk*]�75P�`*]稿6��9ͅ�Nl����J�r��o̴
L�V��H�l��@���юqB��Isui��k��@o��;rL �i)@f��M6} �9��̹Ed�4 s���Сpd����O�!�c,�cl s���WA'e82��6
��J{�W�L���k�Y������o���3�7dfyC�@����
�cn7dNm�PN�u�;�D�@�B�!��4]m�T^�p�[S�
#���������8�ЦNh�
�;�!x�j��v�����_�r����m[P��t��\>�7����>y����vsk���n��z� ����fuv��\N������ZN,/�>]o���t�ہ+�Ȯ��z�����:qw�a��{��n������ 4������������/���ɯ�;�rvX���sb�O���0u�b��~�����o���;��}#��%��w��?l���?�N���g�?�.o>��~���mɕ�fJ��Ƶ�n|˰�&�˛������v����n�y��LZ_��_~;;ܲ��������%����\�lu�~��7ۭ�vq)I�e���n6W�:뺭�U<�� � � � � � � � � �(�8��������W?//���������������s��C��g;��I�k�ۊ�J��~;��o�o��C���g�ć�`�_.���G%If�"٣�j%!�Ԯ�E����K*;d�d=��s��ҟ�?K_�V+�Z��
�Vh�BY �Hҫ,�$�J.PrA_���V1���U�8�%=�2��b|��/���$
%i(ICI*
PQ��T�� �(@E*
P�>��\z����%(�CEJ�P���40��^;K�S�?��ѓ��ry���\T�T�T�T�T�T�T�T�T�T�T�T�T
UUSURUQUPUOUNU�U�U�U�U�U��^��������O����3�=p������(p�q=���F�q���~� X�Kv?
���h�'�����#7Я���捵���[o��q�@?
��8
��̣�2
���dG��Q��Q`�F&�ȸe`��_�ȄE`n�)�K8c�p�k�z{�`�cb��N0�QʹrӜz�U2�z����2k҂ٮv "Z��j�T{���u��V��ؙ(�D:|鹇��_�~?O��E#�t�����G|]�����ogH���妝��^ow��<>�#�7+�F�gG��tu��#�����?�|�*4Vh��hQ�NY�n+w[<�O� a���W=�At��@=�=�ǫ�%� �`�E�&б tl�~l����ѡ�
}~���
���Ƥ0/M�a��1��fH�OӁm3���.������sٞ�[�� ��h ���m���gn�{���ji9�$� ��� 0�Q F�n���@?
��8
L����t=�nXF�u��(cw8��aG�i���.�(0��(��� ��Q F�n��6)j]�5kY�a�p�k�2��}'�2v�]��G�q�0��܍��5�
t2�����m�R�G�����ss���������o=����S�y^���
�� �i
endstream
endobj
44 0 obj
<>stream
hޜ��j\G�_e����4���ZS���b ��&��@���5G-�Es��8H����4gϺ�,Tj������Xq�"jaj�����a���y�E Ѡ�kQ�KsΤ4�i�k�g��K_�(}q��:����D�2٣�i���$�.i��ݻ�������\�=A�R=A���@�����.ʺ`������w�������n�vs��;�D��}b�DИ�j�-̎��9
�S����p�p� ��y2q���5�n�`[�A���z���@y.Zs��9�BC]���E��U�8\�^pC�`C�`Y0,��0~+�0CYf(7�qȤ9�P�C�e[0�W��� V/C�f-cx�ɞ@y�'���E���d������d��蚖� ��p��0���2���BYe۳�m�Pn��~������q,�p�g ��/���"+�?�<�����,ز�e��G�9pl�m�In�II�&9Nr��4ɵ$gI.�����L���Y0���uܾƉs�eO�o��n]Jru��y{�t|����������Ծ���fN�$����ok������.�I�%9Kr=�٪+s��$ד�Hr3�uJr5��w�����ӏ����3�x]���Û����N�=ݼ;�^�^�����ol�L�u��l!�Ӈ�?��wǗ����������)���on���$�Fl-b�8"�(J�PTñFq���5�k�(�Q\�x���2�=�?G��9r�0�a�à�A��%�hXBOBOBOBOBo;.6�ry�5
�(\��F8a�C�.7D11��QL�bb�0Na��x
�5��Ы�WC��KZOr�'����'����8�I��$ג�%���F��9�(�%�a�}Xr�؇h[�����>�.ג�%���F�����E�7�9��x��w���f����������3����O��E�D���/^���.R��9�I��$ג�%���$�<I.O)�%���}���m.��W�9�]Β\Or#��ǔ�j��q0 �9��
endstream
endobj
45 0 obj
<>stream
hޤT]k�0�+�*��V�`pK�A˝����S�4�&Ÿp���+%���(�A�m�jfG��K��x
N !
@#h}bt`P��<X$��=�M� �����4��烅���=� ��w�Xx��)Y0�Qs�3���a[�-�/r��X,ԕꍺc� �<��[
k��R��q�u�x����x��ܗ��~~�w�X^/�Z�;�҉�-~C"�I�n��7��i��<�ƫ4{��O�z��]��7�Ǚ���ꗺ_�/�����f�SWy7���
op0�R?��Ӹ�i\O���j~��OWw�V/�I(r���b���#狧��b�+���:G������?o�>�J��@F?>;|K�
�͛>7����}�8q����^�����]�;�֠%�-���C8$Z��
Ծ��E���Q�L-��$UI��5W�*IU2V�X���le��JDv<��K|[� ��$
endstream
endobj
46 0 obj
<>
endobj
47 0 obj
<>stream
2023-03-23T18:12:44-07:00
2023-03-23T18:12:45-07:00
2023-03-23T18:12:45-07:00
Adobe InDesign 18.1 (Macintosh)
uuid:be570cc4-60ec-f743-af4f-2059e75072db
xmp.did:09482ce4-1670-4781-b03f-4550d85900bd
xmp.id:a39c16f4-f0db-4c86-910a-0b633a6d01e1
proof:pdf
xmp.iid:e00c0ca6-6980-4838-987b-3f2013396550
xmp.did:b11bf57e-0d29-41f9-bcaf-3020ae7e5171
xmp.did:09482ce4-1670-4781-b03f-4550d85900bd
default
converted
from application/x-indesign to application/pdf
Adobe InDesign 18.1 (Macintosh)
/
2023-03-23T18:12:44-07:00
application/pdf
Adobe PDF Library 17.0
False
endstream
endobj
48 0 obj
<>
endobj
xref
0 275
0000000000 65535 f
0000049963 00000 n
0000051614 00000 n
0000058108 00000 n
0000059825 00000 n
0000065024 00000 n
0000067146 00000 n
0000071792 00000 n
0000073540 00000 n
0000076418 00000 n
0000076486 00000 n
0000076592 00000 n
0000082063 00000 n
0000082332 00000 n
0000082641 00000 n
0000082666 00000 n
0000083069 00000 n
0000083628 00000 n
0000083894 00000 n
0000115291 00000 n
0000117440 00000 n
0000125720 00000 n
0000134841 00000 n
0000141810 00000 n
0000142130 00000 n
0000142381 00000 n
0000142503 00000 n
0000142873 00000 n
0000143335 00000 n
0000143404 00000 n
0000149381 00000 n
0000149949 00000 n
0000150486 00000 n
0000151022 00000 n
0000151275 00000 n
0000151300 00000 n
0000151337 00000 n
0000151529 00000 n
0000151975 00000 n
0000152225 00000 n
0000152361 00000 n
0000152795 00000 n
0000153119 00000 n
0000153254 00000 n
0000155190 00000 n
0000156323 00000 n
0000156902 00000 n
0000156980 00000 n
0000159540 00000 n
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
0000000000 65535 f
trailer
<]>>
startxref
116
%%EOF
|
https://www.databricks.com/dataaisummit/speaker/natalia-demidova/#
|
Natalia Demidova - Data + AI Summit 2023 | Databricks

Director - Solution Principal, Data Science and Artificial Intelligence, North America at Hitachi Solutions America Ltd.

Natalia holds a Ph.D. in Mechanical Engineering and serves as a Director - Solution Principal in Data, Data Science and AI at Hitachi Solutions. She has led the design of real-time AI platforms with MLOps and LLMOps, as well as machine learning pipelines aligned with clients' strategic business goals and KPIs. Examples of her solutions built with Azure Databricks include an Intelligent Knowledge Management platform, an AEC Construction Cloud with AI, and IoT-based predictive maintenance.
|
https://www.databricks.com/company/partners/consulting-and-si/partner-solutions/lovelytics-health
|
Lovelytics Health | Databricks

Brickbuilder Solution: Health Data Interoperability by Lovelytics
An industry-specific solution developed by Lovelytics and powered by the Databricks Lakehouse Platform.

Quick and meaningful analytics for health data
The healthcare industry has a legacy of highly structured data models and complex analytics pipelines serving use cases such as clinical trial analytics, therapeutics, operational reporting, and governance and compliance. The Lovelytics Health Data Interoperability accelerator automates the ingestion of streaming FHIR bundles into the lakehouse for downstream patient analytics at scale. Now you can:
- Democratize technology and analytics to prototype health data dashboards more quickly
- Simplify the exchange of health data models and reuse data assets for a variety of new use cases
- Establish the right foundation for your analytics roadmap
|
https://www.databricks.com/fr/solutions/industries/healthcare-and-life-sciences
|
Lakehouse pour la santé et les sciences de la vie - DatabricksSkip to main contentPlateformeThe Databricks Lakehouse PlatformDelta LakeGouvernance des donnéesData EngineeringStreaming de donnéesEntreposage des donnéesPartage de donnéesMachine LearningData ScienceTarifsMarketplaceOpen source techCentre sécurité et confianceWEBINAIRE mai 18 / 8 AM PT
Au revoir, entrepôt de données. Bonjour, Lakehouse.
Assistez pour comprendre comment un data lakehouse s’intègre dans votre pile de données moderne.
Inscrivez-vous maintenantSolutionsSolutions par secteurServices financiersSanté et sciences du vivantProduction industrielleCommunications, médias et divertissementSecteur publicVente au détailDécouvrez tous les secteurs d'activitéSolutions par cas d'utilisationSolution AcceleratorsServices professionnelsEntreprises digital-nativesMigration des plateformes de données9 mai | 8h PT
Découvrez le Lakehouse pour la fabrication
Découvrez comment Corning prend des décisions critiques qui minimisent les inspections manuelles, réduisent les coûts d’expédition et augmentent la satisfaction des clients.Inscrivez-vous dès aujourd’huiApprendreDocumentationFORMATION ET CERTIFICATIONDémosRessourcesCommunauté en ligneUniversity AllianceÉvénementsSommet Data + IABlogLabosBeacons26-29 juin 2023
Assistez en personne ou connectez-vous pour le livestream du keynoteS'inscrireClientsPartenairesPartenaires cloudAWSAzureGoogle CloudContact partenairesPartenaires technologiques et de donnéesProgramme partenaires technologiquesProgramme Partenaire de donnéesBuilt on Databricks Partner ProgramPartenaires consulting et ISProgramme Partenaire C&SISolutions partenairesConnectez-vous en quelques clics à des solutions partenaires validées.En savoir plusEntrepriseOffres d'emploi chez DatabricksNotre équipeConseil d'administrationBlog de l'entreprisePresseDatabricks VenturesPrix et distinctionsNous contacterDécouvrez pourquoi Gartner a désigné Databricks comme leader pour la deuxième année consécutiveObtenir le rapportEssayer DatabricksRegarder les démosNous contacterLoginJUNE 26-29REGISTER NOWLakehouse pour la santé et les sciences de la vieOffrir de meilleurs résultats aux patients grâce aux données et à l'IADémarrerPlanifier une démoQuatre défis liés aux données dans le domaine de la santé et des sciences de la vieFragmentation des données des patients
Les silos de données et la prise en charge limitée des données non structurées empêchent les organisations de comprendre le parcours du patient.Croissance rapide des données de santé
Les anciennes architectures de données on-premise sont complexes à gérer et coûteuses à faire évoluer pour répondre aux volumes massifs de données de santé actuelles, dus notamment à la croissance de l'imagerie et de la génomique.Soins et opérations en temps réel
Les data warehouses et les outils disjoints empêchent de fournir en temps réel les insights nécessaires à la prise de décisions cruciales en matière de soins, ainsi qu'à la fabrication et à l'administration en toute sécurité de produits thérapeutiques importants.Complexité de l'analytique avancée de la santé
Les capacités légères de ML empêchent les organisations de s'attaquer à un large spectre, des modèles de soins aux patients de nouvelle génération à l'analytique prédictive pour la R&D de médicaments.Lakehouse pour la santé et les sciences de la vieUne plateforme unifiée d'IA et de données
A single platform that brings together all your data and analytics workloads to enable deep innovation in patient care and drug R&D.

Partner solutions
The world's leading solution providers for healthcare and life sciences, such as Deloitte, Accenture and ZS Associates, build on the lakehouse. Take advantage of prebuilt offerings that accelerate data-driven transformation in drug R&D and patient care.

Tools to accelerate
Databricks and its partners have developed a series of solution accelerators that make it easy to integrate healthcare data, such as HL7 messages, and to deliver use cases such as medical text analytics and drug safety monitoring.

Industry collaboration
Enable secure, open data sharing and collaboration with organizations across the healthcare ecosystem to accelerate life-saving research and improve care delivery.

Delivering the care of the future with the lakehouse
"The Databricks Lakehouse for Healthcare and Life Sciences provides GE Healthcare with a modern, open and collaborative platform for building a view of patients across their care journey. With these capabilities, we have reduced costly legacy data silos and equipped our teams with accurate, timely insights."
– Joji George, Chief Technology Officer, LCS Digital, GE Healthcare

Why a lakehouse for healthcare and life sciences?
Accelerate research and improve patient outcomes on an open, collaborative platform for data and AI

360° view of the patient
Bring together all your structured and unstructured data (patient, R&D and operations) on a single platform for analytics and AI. With a holistic view of the patient journey, organizations can deliver more personalized care.

Infinite scalability for population-level studies
Quickly and reliably analyze data from millions of patients on a scalable cloud platform. With these population-scale insights, organizations gain a more complete view of health trends, enabling them to develop better therapies.

Real-time analytics, real-time operations
Rapidly integrate and process streaming data, wherever it lives, to power real-time analytics, with applications ranging from hospital bed capacity management to optimizing pharmaceutical manufacturing and distribution.

ML-powered drug discovery and patient care
Unlock the full potential of machine learning to better understand disease and predict health needs. All your data is seamlessly connected to a complete suite of collaborative tools for advanced analytics.
Download the ebook

Partners and solutions
Get started with a variety of healthcare- and life-sciences-specific data and analytics solutions and templates.

Connected health
Improve patient experience, and your performance, by personalizing the care journey and access to services. Get started

PrecisionView™
Expand capabilities, increase capacity and enrich internal collaboration in healthcare and life sciences. Get started

Health data interoperability
Automate the integration of streaming FHIR bundles into your lakehouse to analyze downstream patient data at scale. Get started

Intelligent data management for biomedical research
Turn scientific data into an enterprise asset to create an end-to-end value chain. Get started

Explore all partner solutions

Data models and cohort building: OMOP concordance and propensity scoring
Easily integrate and standardize real-world data in your lakehouse for observational analysis at scale. Get started

Interoperability: ingest HL7v2 messages
Automate the ingestion of streaming HL7v2 messages into your lakehouse for real-time analytics. Get started

Interoperability: ingest FHIR bundles
Automate the ingestion of streaming FHIR bundles into your lakehouse for downstream patient analytics. Get started

Imaging: digital pathology classification
Augment diagnostic workflows with deep learning by detecting metastases on digital pathology slides. Get started

R&D: drug target identification
Analyze genetic associations at scale to help R&D teams identify new drug targets. Get started

Population health: disease risk prediction
Build predictive disease-risk models to improve care management programs. Get started

NLP: detect adverse events
Improve drug safety monitoring by detecting adverse events with our NLP solution co-developed with John Snow Labs. Get started

NLP: extract real-world oncology data
Turn unstructured oncology notes into new patient insights with our NLP solution co-developed with John Snow Labs. Get started

NLP: automate PHI removal
Automate the removal of sensitive patient information from text data with our NLP solution co-developed with John Snow Labs. Get started

View all solutions

Lakehouse for Healthcare and Life Sciences in action
Learn more:

Healthcare
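The HL7v2 interoperability accelerator above automates streaming ingestion of these messages at scale. As a rough, hypothetical illustration of the format involved (this is not the accelerator itself, and the sample message content is invented), an HL7v2 message is a set of pipe-delimited segments that can be split into fields:

```python
def parse_hl7v2(message: str) -> dict:
    """Parse a pipe-delimited HL7v2 message into {segment_id: [fields]}.

    Real HL7v2 separates segments with carriage returns and has special
    rules for MSH field numbering; this sketch ignores those subtleties.
    """
    segments = {}
    for line in message.replace("\r", "\n").strip().split("\n"):
        fields = line.split("|")
        segments[fields[0]] = fields  # first field names the segment (MSH, PID, ...)
    return segments

# A minimal (fabricated) ADT admission message: one MSH header, one PID segment
msg = (
    "MSH|^~\\&|SENDER|FAC|RCVR|FAC|202301011200||ADT^A01|123|P|2.5\n"
    "PID|1||PATID1234||DOE^JOHN"
)
parsed = parse_hl7v2(msg)
```

Once segments and fields are split out like this, they can be mapped to tabular columns for downstream patient analytics.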
Deliver patient-centered care with data and AI

Life Sciences
Bring new treatments to the patients who need them with data analytics and AI

Improve health outcomes with data and AI
Download the ebook →

Resources
All the resources you need. All in one place.
Explore the resource library for ebooks and videos on data and AI for healthcare and life sciences.
Explore resources

Ebooks
Improving outcomes with a lakehouse for healthcare and life sciences
Unlock new patient insights with natural language processing at scale
Lakehouse: realizing the potential of real-world data
Solution sheet: the lakehouse for healthcare and life sciences

Webinars
Workshop: FHIR and real-time patient analytics with a lakehouse
Webinar: Chesapeake Regional Information System, an engine of healthcare innovation for patients
Workshop: standardizing data with OMOP and predicting disease risk with ML
Workshop: extracting real-world data with NLP
Workshop: accelerating R&D with real-world data and AI

Blogs
Amgen accelerates drug development and commercialization with a lakehouse for healthcare and life sciences
The lakehouse for healthcare and life sciences
Improving drug safety with NLP-based adverse event detection
Databricks' open-source genomics toolkit outperforms leading tools
Extracting oncology insights from real-world clinical data with NLP

Ready to get started?
We'd love to learn about your business goals.
Our services team will do everything it can to help you succeed.
TRY DATABRICKS FOR FREE | Schedule a demo

Product: Platform Overview | Pricing | Open Source Tech | Try Databricks | Demo
Learn & Support: Documentation | Glossary | Training & Certification | Help Center | Legal | Online Community
Solutions: By Industries | Professional Services
Company: About Us | Careers at Databricks | Diversity and Inclusion | Company Blog | Contact Us
See job openings at Databricks
Countries/regions: English (United States) | Deutsch (Germany) | Français (France) | Italiano (Italy) | 日本語 (Japan) | 한국어 (South Korea) | Português (Brazil)
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
Privacy Notice | Terms of Use | Your Privacy Choices | Your California Privacy Rights
|
https://www.databricks.com/glossary/pyspark
|
What is PySpark?

Apache Spark is written in the Scala programming language. PySpark was released to support collaboration between Apache Spark and Python; it is, in effect, a Python API for Spark. PySpark also lets you work with Resilient Distributed Datasets (RDDs) in Apache Spark from Python. This is achieved by taking advantage of the Py4J library.

Py4J is a popular library, integrated within PySpark, that allows Python to dynamically interface with JVM objects. PySpark features quite a few libraries for writing efficient programs, and various compatible external libraries are available as well. Here are some of them:

PySparkSQL
A PySpark library for applying SQL-like analysis to huge amounts of structured or semi-structured data. SQL queries can be run with PySparkSQL, and it can also be connected to Apache Hive, where HiveQL can be applied. PySparkSQL is a wrapper over the PySpark core. It introduced the DataFrame, a tabular representation of structured data similar to a table in a relational database management system.

MLlib
MLlib is a wrapper over PySpark and is Spark's machine learning (ML) library. It uses data-parallel techniques to store and work with data, and its machine learning API is easy to use. MLlib supports many machine learning algorithms for classification, regression, clustering, collaborative filtering and dimensionality reduction, as well as underlying optimization primitives.

GraphFrames
GraphFrames is a general-purpose graph processing library that provides a set of APIs for performing graph analysis efficiently, using the PySpark core and PySparkSQL. It is optimized for fast distributed computing.

Advantages of using PySpark:
• Python is very easy to learn and implement.
• It provides a simple and comprehensive API.
• With Python, code readability, maintenance and familiarity are far better.
• It offers various options for data visualization, which is difficult with Scala or Java.

Additional Resources
Getting Started with Python on Apache Spark
Getting The Best Performance With PySpark
From Python to PySpark and Back Again – Unifying Single-host and Distributed Deep Learning with Maggy
Democratizing PySpark for Mobile Game Publishing
Back to Glossary
|
https://www.databricks.com/dataaisummit/speaker/dawn-song
|
Dawn Song - Data + AI Summit 2023 | Databricks
SAN FRANCISCO, JUNE 26-29 | VIRTUAL, JUNE 28-29
Dawn Song, Professor, Department of Electrical Engineering and Computer Science at UC Berkeley
Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley and co-Director of the Berkeley Center on Responsible Decentralized Intelligence. Her research interests lie in AI and deep learning, security and privacy, and blockchain. She is the recipient of various awards, including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the ACM SIGSAC Outstanding Innovation Award, and numerous Test-of-Time and Best Paper Awards from top conferences in computer security and deep learning. She is an ACM Fellow and an IEEE Fellow, and is ranked the most cited scholar in computer security (AMiner Award). She obtained her Ph.D. from UC Berkeley. She is also a serial entrepreneur and has been named to the Inc. Female Founders 100 list and the Wired25 list of innovators.
|
https://www.databricks.com/dataaisummit/speaker/michael-armbrust
|
Michael Armbrust - Data + AI Summit 2023 | Databricks
Michael Armbrust, Distinguished Engineer at Databricks
Michael Armbrust is a committer and PMC member of Apache Spark™ and the original creator of Spark SQL, Structured Streaming and Delta. He currently leads the Delta Live Tables team at Databricks. He received his PhD from UC Berkeley in 2013, advised by Michael Franklin, David Patterson and Armando Fox. His thesis focused on building systems that allow developers to rapidly build scalable interactive applications, and specifically defined the notion of scale independence. His interests broadly include distributed systems, large-scale structured storage and query optimization.
|
https://www.databricks.com/dataaisummit/speaker/will-girten
|
Will Girten - Data + AI Summit 2023 | Databricks
Will Girten, Sr. Specialist Solutions Architect at Databricks
As a lead Specialist Solutions Architect at Databricks and author of the Node.js connector for Delta Sharing, Will is driven by his passion for open source technologies and creating a more connected, data-driven world. He has over a decade of experience in big data, data warehousing and performance optimization. Will is committed to pushing the limits of what's possible and delivering transformative results; contributing to the Delta Sharing project is just one way he's working to make a difference.
|
https://www.databricks.com/dataaisummit/speaker/lorenzo-de-tomasi
|
Lorenzo De Tomasi - Data + AI Summit 2023 | Databricks
Lorenzo De Tomasi, Data Architect and Data Platforms Lead at Barilla G. e R. Fratelli S.p.A.
Lorenzo De Tomasi is a skilled AI and data engineering manager. Holding a degree in computer science engineering, he worked as a data scientist at Luxottica Group, developing computer vision algorithms for quality processes. He now leads Advanced Analytics technology at Barilla Group, implementing complex machine learning, deep learning and advanced analytics solutions in marketing, finance, RDQ, sales and more.
|
https://www.databricks.com/dataaisummit/speaker/kylie-taylor/#
|
Kylie Taylor - Data + AI Summit 2023 | Databricks
Kylie Taylor, Data Scientist at Mars Petcare
Kylie Taylor is a Data Scientist at Mars Petcare. Her current work focuses on deploying machine learning models at scale and modeling the impact of price inflation on shopper behavior. She holds a Master's in Economics with a focus in Statistics from the University of Texas at Austin. Prior to her role as a Data Scientist, she served as a Senior Statistician in the Global R&D division of Mars Petcare.
|
https://www.databricks.com/dataaisummit/speaker/reynold-xin/#
|
Reynold Xin - Data + AI Summit 2023 | Databricks
Reynold Xin, Co-founder and Chief Architect at Databricks
Reynold is an Apache Spark™ PMC member and the top contributor to the project. He initiated and led efforts such as DataFrames and Project Tungsten. He is also a co-founder and Chief Architect at Databricks.
|
https://www.databricks.com/dataaisummit/speaker/lewis-mbae/#
|
Lewis Mbae - Data + AI Summit 2023 | Databricks
Lewis Mbae, Head of Customer Engineering at RudderStack
Lewis leads Customer Engineering at RudderStack. His team's core focus is to be the trusted technical advisor for all RudderStack customers throughout their journey on the platform. Prior to RudderStack, he spent seven years at Fastly, where he held senior roles in Sales Engineering. On the personal side, he grew up in Kenya and moved to the United States to attend college. He has an MS in Computer Science from Columbia as well as a BS in Electrical Engineering from Stanford.
|
https://www.databricks.com/dataaisummit/speaker/nihar-sheth/#
|
Nihar Sheth - Data + AI Summit 2023 | Databricks
Nihar Sheth, Senior Product Manager at AWS - Amazon
Nihar Sheth is a Senior Product Manager on the Amazon Kinesis Data Streams team at Amazon Web Services. He is passionate about developing intuitive product experiences that solve complex customer problems and enable customers to achieve their business goals.
|
https://www.databricks.com/dataaisummit/speaker/willy-lulciuc
|
Willy Lulciuc - Data + AI Summit 2023 | Databricks
Willy Lulciuc, Sr. Software Engineer at Astronomer
Willy Lulciuc is a software engineer at Astronomer working on observability and lineage. He makes datasets discoverable and meaningful with metadata. He co-created Marquez and is now involved in the OpenLineage initiative. Previously, he was a founding engineer at Datakin, a data lineage startup. When he's not reviewing code and creating indirections, he can be found experimenting with analog synthesizers.
|
https://www.databricks.com/dataaisummit/speaker/rahil-bhatnagar/#
|
Rahil Bhatnagar - Data + AI Summit 2023 | Databricks
Rahil Bhatnagar, Development Lead, LOLA at Anheuser-Busch
Rahil Bhatnagar has experience leading cross-functional teams to build scalable products, taking them from idea to production, and applying his distributed systems and game development background to deliver sustainable, dynamic solutions on time. He currently leads and scales LOLA, Anheuser-Busch's machine learning platform, to meet the growing demand for machine learning insights in a tech-first FMCPG company.
|
https://www.databricks.com/glossary/digital-twin
|
Digital Twin

What is a Digital Twin?
The classical definition of a digital twin is: "A digital twin is a virtual model designed to accurately reflect a physical object." (IBM) For a discrete or continuous manufacturing process, a digital twin gathers system and process state data with the help of various IoT sensors (operational technology (OT) data) and enterprise data (information technology (IT) data) to form a virtual model, which is then used to run simulations, study performance issues and generate insights.

The concept of digital twins is not new. In fact, the first application is reported to have been over twenty-five years ago, during the early phases of foundation and cofferdam construction for the London Heathrow Express facilities, to monitor and predict foundation borehole grouting. In the years since, edge computing, AI, data connectivity, 5G and improvements in the Internet of Things (IoT) have made digital twins cost-effective, and they are now an imperative in today's data-driven businesses.

Digital twins are now so ingrained in manufacturing that the global market is forecast to reach $48 billion in 2026.
This figure is up from $3.1B in 2020, a CAGR of 58%, riding the wave of Industry 4.0.

Today's manufacturers are expected to streamline and optimize every process in their value chain, from product development and design through operations and supply chain optimization to customer feedback, responding swiftly to rapidly growing demands. The digital twin category is broad and addresses a multitude of challenges within manufacturing, logistics and transportation. The most common challenges faced by the manufacturing industry that digital twins address are:

- Product designs are increasingly complex, resulting in higher costs and longer development times
- The supply chain is opaque
- Production lines are not optimized: performance variations, unknown defects and projected operating costs are obscure
- Poor quality management: over-reliance on theory, managed by individual departments
- Reactive maintenance costs are too high, resulting in excessive downtime or process disruptions
- Incongruous collaboration between departments
- No visibility into customer demand for gathering real-time feedback

Why is this important?
Industry 4.0 and the subsequent intelligent supply chain efforts have made significant strides in improving operations and building agile supply chains, but these efforts would have come at significant cost without digital twin technology. Can you imagine the cost of changing an oil refinery's crude distillation unit process conditions to improve the output of diesel one week and gasoline the next to address changes in demand and ensure maximum economic value? Can you imagine how to replicate even a simple supply chain to model risk?
It is financially and physically impossible to build a physical twin of a supply chain.

Let's look at the benefits that digital twins deliver to the manufacturing sector:

- Product design and development is completed at lower cost and in less time, as iterative simulations under multiple constraints deliver the most optimized design; all commercial aircraft are designed using digital twins
- Digital twins provide awareness of how long inventory will last, when to replenish and how to minimize supply chain disruptions; the oil and gas industry uses supply-chain-oriented digital twins to reduce bottlenecks in storage and midstream delivery, schedule tanker off-loads and model demand with externalities
- Continuous quality checks on produced items, with ML/AI-generated feedback, preemptively assure improved product quality; automotive final paint inspection is performed with computer vision built on top of digital twin technology
- Digital twins provide real-time feedback for striking the sweet spot between replacing a part before the process degrades or breaks down and utilizing components to the fullest; digital twins are the backbone of an asset performance management suite
- Digital twins create the opportunity to keep multiple departments in sync by providing the necessary instructions, modularly, to attain a required throughput; digital twins are the backbone of kaizen events that optimize manufacturing process flow
- Customer feedback loops can be modeled from inputs such as point-of-sale customer behavior, buying preferences or product performance, then integrated into the product development process, forming a closed loop that improves product design

What are Databricks' differentiated capabilities?
The Databricks Lakehouse uses technologies including Delta, Delta Live Tables, Auto Loader and Photon to enable customers to make data available for real-time decisions. Lakehouse for Manufacturing
supports the largest data jobs at near real-time intervals. For example, customers are bringing nearly 400 million events per day from transactional log systems at 15-second intervals. Because of the disruption to reporting and analysis that occurs during data processing, most retail customers load data to their data warehouse in a nightly batch; some companies even load data weekly or monthly.

A Lakehouse event-driven architecture provides a simpler method of ingesting and processing batch and streaming data than legacy approaches such as lambda architectures. It handles change data capture and provides ACID compliance for transactions. Delta Live Tables simplifies the creation of data pipelines and automatically builds in lineage to assist with ongoing management. The Lakehouse allows real-time stream ingestion of data and analytics on streaming data, whereas data warehouses require extraction, transformation, loading and then further extraction from the warehouse to perform any analytics. Photon provides record-setting query performance, enabling users to query even the largest data sets to power real-time decisions in BI tools.

Additional Resources
- Four Forces Driving Intelligent Manufacturing
- Manufacturing Leaders Forum
- Databricks solutions for Manufacturing

Back to Glossary
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121

© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
Privacy Notice | Terms of Use | Your Privacy Choices | Your California Privacy Rights
https://www.databricks.com/dataaisummit/speaker/sean-owen/#
Sean Owen - Data + AI Summit 2023 | Databricks

Sean Owen, Principal Product Specialist at Databricks
Sean is a Principal Product Specialist for Data Science and Machine Learning at Databricks. He is an Apache Spark committer and PMC member, and co-author of Advanced Analytics with Spark. Previously, he was Director of Data Science at Cloudera and an engineer at Google, and worked in early-stage technology venture investing.

Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided at this event.
https://www.databricks.com/blog/2022/02/15/lakehouse-for-financial-services-paving-the-way-for-data-driven-innovation-in-fsis.html
Lakehouse for Financial Services: Paving the Way for Data-Driven Innovation in FSIs - The Databricks Blog
by Antoine Amend and Junta Nakai, February 15, 2022, in News

When it comes to “data-driven innovation,” financial services institutions (FSIs) aren’t what typically comes to mind. But with massive amounts of data at their potential disposal, this isn’t for lack of imagination. FSIs want to innovate but are continually slowed down by complex legacy architectures and vendor lock-in that prevent data and AI from becoming material business drivers.
Largely as a result of these challenges, the financial services industry has arguably seen little innovation in recent decades – even as other regulated sectors such as healthcare and education continue to break barriers. Even for the most established incumbents, a lack of innovation can quickly lead to being taken over by a new, digital-native company – a move some of us at Databricks call Tesla-fication. This is where one disruptive, data and AI-driven innovator becomes disproportionately more successful than the incumbents who previously dominated the space. One indication of this success can be found in the stock market. Today, Tesla boasts a $900+ billion market capitalization, making it worth more than the next 10 leading automotive competitors combined. Incumbency is no longer a moat.
In fact, we’re already starting to see Tesla-fication happening in financial services. Nubank, a Brazilian fintech launched in 2014, has quickly changed the competitive dynamics in its home country and beyond. Early on, Nubank disrupted the credit card market by enabling online applications and by extending credit to those with no credit history. Today, it uses bleeding-edge technology, data and AI to develop new products and services. Data science plays an essential role in every aspect of the business, from customer support to credit lines. Seven years after its launch, in December 2021, Nubank became one of the largest IPOs in Latin America and briefly eclipsed the market capitalization of Brazil’s largest bank. Signs of Tesla-fication are emerging across all segments of financial services, from banking to insurance to capital markets. For FSIs, this means that the traditional sources of competitive advantage – capital and scale – no longer cut it. Today, transformation requires leaders to focus their investments on two modern sources of competitive advantage: data and people.
Introducing Lakehouse for Financial Services
Today, we’re thrilled to introduce Lakehouse for Financial Services to help bring data and people together for every FSI. Lakehouse for Financial Services addresses the unique requirements of FSIs via industry-focused capabilities, such as pre-built solution accelerators, data sharing capabilities, open standards and certified implementation partners. With this platform, organizations across the banking, insurance and capital markets sectors can increase the impact and time-to-value of their data assets, ultimately enabling data and AI to become central to every part of their business, from lending to insuring.
So, why is Lakehouse for Financial Services critical for success? When speaking with our customers, we identified the biggest challenges around transforming into a data-driven organization (and how Lakehouse addresses them):
- Risk of vendor lock-in: FSIs are particularly vulnerable to being stuck with proprietary data formats and technologies that stifle collaboration and innovation. Lakehouse is powered by open source and open standards, meaning that data teams can leverage the tools of their choice.
- No multi-cloud: Increasingly, regulators are asking FSIs to consider the systemic risk arising from overreliance on a single vendor. Lakehouse solves this by offering full support for all major cloud vendors.
- Real-time data access for BI: The most recent data is typically the most valuable, but traditional architectures often make it a hurdle for data analysts to access. With Lakehouse, data teams across functions can always access the most up-to-date, reliable data.
- Lack of support for all data sets: The fastest-growing data in FSIs is unstructured (text, images, etc.), which makes data warehouses less than ideal for critical use cases. Lakehouse handles all types of data, whether structured, semi-structured or unstructured, and even offers data sharing capabilities with leading providers such as FactSet.
- Driving AI use cases: Although the regulated nature of financial services makes it difficult to embrace and scale AI, the main hurdles are internal policies around risk aversion coupled with siloed infrastructure and legacy processes. Lakehouse makes AI accessible and transparent via MLflow; coupled with Delta Lake's time travel capability, AI has been adopted as a next generation of model risk management for independent validation.

What makes Lakehouse for Financial Services Equipped to Tackle These Challenges?
We built Lakehouse for Financial Services specifically to tackle these challenges and empower organizations to find new ways to gain a competitive edge, innovate risk management and more, even within highly regulated environments. Here’s how we’re doing just that:
Pre-built Solution Accelerators for Financial Services Use Cases
Lakehouse for Financial Services aligns with our 14 financial services solution accelerators: fully functional and freely available notebooks that tackle the most common and highest-impact use cases our customers are facing. These use cases include:
- Post-Trade Analysis and Market Surveillance: Using an efficient time series processing engine for market data, this library combines core market data and disparate alternative data sources, enabling asset managers to backtest investing strategies at scale and efficiently report on transaction cost analysis.
- Transaction Enrichment: This scalable geospatial data library enables hyper-personalization in retail banking to better understand the customer transaction behavior required for next-gen customer segmentation and modern fraud prevention strategies.
- Regulatory Reporting: This accelerator streamlines the acquisition, processing and transmission of regulatory data following open data standards and open data sharing protocols.
- GDPR Compliance: Simplify the technical challenges of complying with the “right to be forgotten” requirement while ensuring strict audit capabilities.
- Common Data Models: A set of frameworks and accelerators for common data models to address the challenges FSIs have in standardizing data across the organization.

Industry Open Source Projects
As part of this launch, we’re thrilled to announce that we have joined FINOS (the FinTech Open Source Foundation) to foster innovation and collaboration in financial services. FINOS counts the world’s leading FSIs, such as Goldman Sachs, Morgan Stanley, UBS and JP Morgan, as members. Open source has become a core strategic initiative for data strategies in financial services as organizations look to avoid complex, costly vendor lock-in and proprietary data formats. As part of FINOS, Databricks is helping to facilitate the processing and exchange of financial data throughout the entire banking ecosystem. This is executed via our Delta Lake and Delta Sharing integrations with recent open source initiatives led by major FSIs.
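The Transaction Enrichment idea above can be sketched in a few lines of plain Python. This is only an illustrative stand-in, not the accelerator's actual library; the field names (`lat`, `lon`, `km_from_home`) and the 500 km threshold are invented for the example:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def enrich(txn, home):
    """Attach a distance-from-home feature to a card transaction record."""
    d = haversine_km(txn["lat"], txn["lon"], home["lat"], home["lon"])
    return {**txn, "km_from_home": round(d, 1), "far_from_home": d > 500}

home = {"lat": 40.71, "lon": -74.01}               # cardholder's home (New York)
txn = {"id": "t1", "lat": 34.05, "lon": -118.24}   # transaction in Los Angeles
print(enrich(txn, home)["far_from_home"])  # roughly 3,900 km from home, so True
```

At scale this same feature would be computed over a stream of transactions rather than one record at a time, but the enrichment logic itself stays this simple.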
Databricks is working to help empower the standardization of data by significantly democratizing data accessibility and insights. Ultimately, we want to bring data to the masses. That’s why we recently integrated the LEGEND ecosystem with Delta Lake functionalities such as Delta Live Tables. Developed by leading financial services institutions and subsequently open-sourced through the Linux Foundation, the LEGEND ecosystem allows domain experts and financial analysts to map business logic, taxonomy and financial calculations to data. Now integrated into the Lakehouse for Financial Services, those same business processes can be directly translated into core data pipelines to enforce high-quality standards with minimum operational overhead. Coupled with the Lakehouse query layer, this integration provides financial analysts with massive amounts of real-time data directly from the comfort of their business applications and core enterprise services.
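To make the idea of translating analyst-defined business logic into pipeline stages concrete, here is a minimal pure-Python sketch. It is not the LEGEND ecosystem's API; every name and rule below (the toy FX rate, the 1M threshold) is invented for illustration:

```python
# Business rules declared as small, named transformations, then composed
# into a pipeline, mirroring how analyst-defined logic can be translated
# into data pipeline stages. All names and rates here are illustrative.
from functools import reduce

def normalize_currency(trade):
    # Rule 1: report every trade in EUR (fixed toy conversion rate).
    rate = {"USD": 0.9, "EUR": 1.0}[trade["ccy"]]
    return {**trade, "notional_eur": trade["notional"] * rate, "ccy": "EUR"}

def flag_large(trade):
    # Rule 2: flag trades above a 1M EUR threshold for review.
    return {**trade, "large": trade["notional_eur"] > 1_000_000}

PIPELINE = [normalize_currency, flag_large]

def run(pipeline, records):
    """Apply each declared rule in order to every record."""
    return [reduce(lambda rec, step: step(rec), pipeline, r) for r in records]

trades = [{"id": 1, "ccy": "USD", "notional": 2_000_000},
          {"id": 2, "ccy": "EUR", "notional": 50_000}]
print([t["large"] for t in run(PIPELINE, trades)])  # [True, False]
```

Because each rule is a named, self-contained function, the pipeline definition doubles as documentation of the business logic, which is the property the LEGEND integration aims for at enterprise scale.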
Simple deployment of the Lakehouse environment
With Lakehouse for Financial Services, customers can easily automate security standards. More specifically, the utility libraries and scripts we’ve created for financial services deliver automated setup for notebooks and are tailored to help solve the security and governance issues important to the financial services industry, based on best practices and patterns from our 600+ customers.
A data model framework for standardizing data
In addition to solution accelerators, Lakehouse provides a framework for common data models to address the challenges FSIs have in standardizing data across the organization. For example, one solution accelerator is designed to easily integrate the Financial Regulation (FIRE) data model to drive the standardization of data, serve data to downstream tools, enable AI quality checks and govern the data using Unity Catalog.
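A common data model enforces standardization by checking raw records against a declared schema before they land in governed tables. The sketch below illustrates that idea in plain Python; the schema and field names are invented for the example and are not the FIRE model:

```python
# Toy common-data-model check: validate raw records against a declared
# schema before they reach a standardized table. Field names here are
# illustrative only, not the FIRE standard's.
SCHEMA = {"account_id": str, "balance": float, "currency": str}

def validate(record: dict) -> list:
    """Return a list of violations; an empty list means the record conforms."""
    errors = [f"missing field: {k}" for k in SCHEMA if k not in record]
    errors += [
        f"bad type for {k}: expected {t.__name__}"
        for k, t in SCHEMA.items()
        if k in record and not isinstance(record[k], t)
    ]
    return errors

good = {"account_id": "A-1", "balance": 250.0, "currency": "EUR"}
bad = {"account_id": "A-2", "balance": "250"}
print(validate(good))  # []
print(validate(bad))   # ['missing field: currency', 'bad type for balance: expected float']
```

In a real deployment the same role is played by schema enforcement on Delta tables plus data quality expectations, with Unity Catalog governing who can read and write the standardized data.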
Open data sharing
Last year, we launched Delta Sharing, the world’s first open protocol for securely sharing data across organizations in real time, independent of the platform on which the data resides. This is largely powered by our incredible ecosystem of partners, which we’re continuing to scale and grow. We are thrilled to announce that we have recently invested in TickSmith, a leading SaaS platform that simplifies the online data shopping experience and was one of the first platforms to implement Delta Sharing. With the TickSmith and Databricks integration, FSIs can now easily create, package and deliver data products in a unified environment.
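Delta Sharing recipients typically connect through a small JSON profile file. The fragment below shows the general shape of such a profile; the endpoint URL and token are placeholders, not real credentials:

```json
{
  "shareCredentialsVersion": 1,
  "endpoint": "https://sharing.example.com/delta-sharing/",
  "bearerToken": "<recipient-bearer-token>"
}
```

Any Delta Sharing connector can then reference shared tables against a profile like this as `<profile>#<share>.<schema>.<table>`; see the Delta Sharing protocol documentation for the exact fields a given server supports.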
Implementation Partners
Databricks is working with consulting and SI partner Avanade to deliver risk management solutions to financial institutions. Built on Azure Databricks, our joint solution makes it easier for customers to rapidly deploy data into value-at-risk models to keep up with emerging risks and threats. By migrating to the cloud and modernizing data-driven risk models, financial institutions are able to reduce regulatory and operational compliance risks and scale to meet increased throughput.
Databricks is also partnering with Deloitte on the FinServ Governed Data Platform, a cloud-based, curated data platform that meets regulatory requirements and builds a single source of truth for financial institutions, intelligently organizing data domains and approved provisioning points to enable business intelligence, visualization, predictive analytics, AI/ML, NLP and RPA.
Conclusion
Tesla-fication is starting to happen all around us. Lakehouse for Financial Services is designed to help our customers make a leapfrog advancement in their data and AI journey with pre-built solution accelerators, data sharing capabilities, open standards and certified implementation partners. We are on a mission to help every FSI become the Tesla of its industry.
Want to learn more? Check out this overview and see how you can easily get started or schedule a demo.
https://www.databricks.com/dataaisummit/speaker/pouya-barrach-yousefi
Pouya Barrach-Yousefi - Data + AI Summit 2023 | Databricks

Pouya Barrach-Yousefi, Data Pro and Director of Strategic Accounts at Prophecy
During his six years at IQVIA before joining Prophecy, Pouya was a data science developer and tech lead for the Analytics Center of Excellence, then joined the global Data Science & Advanced Analytics team as an Associate Data Science Director focused on delivering commercial AI/ML solutions for pharma clients, and finally, as Director of Enterprise AI/ML Strategy, he led data, data science and machine learning improvements across IQVIA.
https://www.databricks.com/jp/product/marketplace
Databricks Marketplace | Databricks
An open marketplace for data, analytics and AI

What is Databricks Marketplace?
Databricks Marketplace is an open marketplace for all your data, analytics and AI, powered by the open source Delta Sharing standard. The Databricks Marketplace expands your opportunity to deliver innovation and advance all your analytics and AI initiatives.
Obtain data sets as well as AI and analytics assets — such as ML models, notebooks, applications and dashboards — without proprietary platform dependencies, complicated ETL or expensive replication. This open approach allows you to put data to work more quickly, in every cloud, with your tools of choice.

Discover more than just data: Unlock innovation and advance your organization’s AI, ML and analytics initiatives. Access more than just data sets, including ML models, notebooks, applications and solutions.

Evaluate data products faster: Prebuilt notebooks and sample data help you quickly evaluate a data product and gain much greater confidence that it is right for your AI, ML or analytics initiatives.

Avoid vendor lock-in: Substantially reduce the time to deliver insights and avoid lock-in with open, seamless sharing and collaboration across clouds, regions and platforms, integrating directly with your tools of choice, right where you work.

Featured data providers on Databricks Marketplace. Become a data provider on the marketplace.
Related Resources
- Event: Register now for Data + AI Summit 2023
- eBook: A New Approach to Data Sharing
- eBook: Data, Analytics and AI Governance
- Keynote: Data Governance and Sharing on the Lakehouse at Data + AI Summit 2022
- Blog: Marketplace Public Preview Announcement; Databricks Marketplace Announcement at Data + AI Summit 2022
- Documentation: AWS, Azure, GCP
https://www.databricks.com/p/ebook/the-big-book-of-data-science-use-cases-nurture
The Big Book of Data Science Use Cases – 2nd Edition | Databricks

eBook: The Big Book of Data Science Use Cases – 2nd Edition
See top use cases for applying data science in businesses across industries. The world of data science is evolving so fast that it’s not easy to find real-world use cases relevant to what you’re working on. That’s why we’ve collected these blogs from industry thought leaders with practical use cases you can put to work right now. This how-to reference guide provides everything you need, including code samples, so you can get your hands dirty working with the Databricks platform. In this eBook, you will learn:

- Top ways to apply data science so it has an impact on your business
- How-to walk-throughs using code samples to recreate data science use cases
- Customer stories where users are seeing success with Databricks

Complete the form
https://www.databricks.com/p/webinar/databricks-onboarding-sessions?utm_source=databricks&utm_medium=website&utm_campaign=7013f000000LkrvAAC&utm_content=learn-page
Resources - Databricks
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights
|
https://www.databricks.com/de/events
|
Databricks Events | Databricks

May 9 | 8 AM PT: Discover the Lakehouse for Manufacturing
Learn how Corning makes key decisions that minimize manual inspections, lower shipping costs, and increase customer satisfaction. Register today

Databricks Events
Find out about upcoming meetups, webinars, conferences, and more from and about Databricks.

Data + AI Summit 2023, June 26–29
The choice is yours: attend in person, or follow the keynotes and selected sessions via livestream. Register now

Browse All Upcoming Events
|
https://www.databricks.com/dataaisummit/speaker/lin-qiao/#
|
Lin Qiao - Data + AI Summit 2023 | Databricks
SAN FRANCISCO, JUNE 26-29 | VIRTUAL, JUNE 28-29

Lin Qiao
Co-creator of PyTorch; Co-founder and CEO at Fireworks

Lin Qiao is the co-founder and CEO of Fireworks, which is on a mission to accelerate the transition to AI-powered business via interactive experimentation and a production platform centered on PyTorch technologies and state-of-the-art models built with PyTorch. She led the development of PyTorch, AI compilers, and on-device AI platforms at Meta for the past half-decade. She drove AI research-to-production innovation across hardware acceleration, enabling model exploration and large, complex model scaling, and building production ecosystems and platforms for all of Meta's AI use cases. She received a Ph.D. in computer science, started her career as a researcher at the Almaden Research Lab, and later moved to industry as an engineer. Prior to Meta, she worked across a broad range of distributed and data processing domains, from high-performance columnar databases and OLTP systems to stream processing systems, data warehouse systems, and logging and metrics platforms.

Looking for past sessions?
Take a look through the session archive to find even more related content from previous Data + AI Summit conferences.
|
https://www.databricks.com/it/solutions/industries/healthcare-and-life-sciences
|
Lakehouse for Healthcare and Life Sciences - Databricks

Lakehouse for Healthcare and Life Sciences
Deliver better patient outcomes with the power of data and AI. Get started | Schedule a demo

Four data-management challenges in healthcare and life sciences

Patient data fragmentation
Data silos and limited support for unstructured data prevent organizations from following the patient journey.

Rapid growth of health data
Existing on-premises architectures are complex to manage and costly to scale for today's enormous volume of health data, fueled in part by the growth of imaging and genomics.

Real-time care and operations
Disconnected data warehouses and tools stand in the way of the real-time insights needed to make critical treatment decisions and to safely manufacture and deliver important therapies.

Complex advanced healthcare analytics
Lightweight ML capabilities prevent organizations from handling everything from next-generation patient-care models to predictive analytics in drug research and development.

Lakehouse for Healthcare and Life Sciences

A unified platform for data and AI
A single platform that brings together all data and analytics workloads to drive transformative innovation in patient care and drug R&D.
Partner solutions
The world's leading providers of healthcare and life sciences solutions, such as Deloitte, Accenture and ZS Associates, build for the lakehouse. Take advantage of prepackaged offerings that accelerate data-driven transformation in pharmaceutical research and patient care.

Solution Accelerators
Databricks and its partners have built a series of Solution Accelerators that simplify the ingestion of healthcare data, such as HL7 messages, and enable use cases such as medical text analytics and drug safety monitoring.

Industry collaboration
Secure, open data sharing and collaboration with organizations across the healthcare ecosystem help accelerate the discovery of life-saving drugs and improve care delivery.
Step into the future of care with the lakehouse

“Databricks Lakehouse for Healthcare and Life Sciences gives GE Healthcare a modern, open, collaborative platform for a complete view of the patient across the entire care journey. With these capabilities, we have reduced costly legacy data silos and equipped our teams with deep, timely, and accurate insights.”
— Joji George, Chief Technology Officer, LCS Digital, GE Healthcare

Why Lakehouse for Healthcare and Life Sciences?
Accelerate research and improve patient outcomes on an open, collaborative platform for data and AI.

360° patient view
Bring all structured and unstructured data (patient, R&D and operations) together on a single platform for analytics and AI. With a complete view of the patient journey, organizations can deliver more personalized therapies.

Infinite scale for population-level studies
With a cloud-scalable platform, data from millions of patients can be analyzed quickly and reliably. By capturing population-wide data, organizations gain a more complete view of health trends and can, as a result, develop better therapies.

Real-time analytics, real-time operations
Streaming data can be ingested from any source and processed to power real-time analytics, with applications ranging from hospital bed-capacity management to optimizing drug manufacturing and distribution.

ML-powered drug discovery and patient care
Harness the power of machine learning to better understand disease and predict health needs. All data connects directly to a full suite of collaborative tools for advanced analytics. Download the eBook

Partners and solutions
Get started with a range of data management and analytics solutions and templates built specifically for healthcare and life sciences.

Smart healthcare: Improve patient experience and clinical outcomes by personalizing the patient journey and access to care.
PrecisionView™: Extend capabilities, increase capacity, and enrich internal collaboration across healthcare and life sciences.
Healthcare data interoperability: Automate the ingestion of streaming FHIR bundles into the lakehouse for downstream patient analytics at scale.
Intelligent data management for biomedical research: Turn scientific data into an enterprise asset by building a complete value chain.

Explore all partner solutions

Data models and cohort building (OMOP and propensity score matching): Easily ingest and standardize real-world data in the lakehouse for large-scale observational analysis.
Interoperability (HL7v2 message ingestion): Automate the ingestion of streaming HL7v2 messages into the lakehouse for real-time analytics.
Interoperability (FHIR bundle ingestion): Automate the ingestion of streaming FHIR bundles into the lakehouse for downstream patient analytics.
Imaging (digital pathology classification): Augment diagnostic workflows with deep learning by detecting metastases in digital images.
R&D (drug target identification): Analyze genetic associations at scale to help R&D teams identify new drug targets.
Population health (disease risk prediction): Build predictive disease-risk models to improve care management programs.
NLP (adverse event detection): Improve drug safety monitoring by detecting adverse events with our NLP solution co-developed with John Snow Labs.
NLP (extracting real-world oncology data): Turn unstructured oncology notes into new patient insights with our NLP solution co-developed with John Snow Labs.
NLP (automated removal of protected health information, PHI): Automate the removal of sensitive patient data from text with the NLP solution co-developed with John Snow Labs.

See all solutions

Lakehouse for Healthcare and Life Sciences in action
Want to learn more?
Healthcare: Deliver patient-centered care with the power of data and AI.
Life sciences: Bring new treatments to patients with data analytics and AI.
Improve healthcare outcomes with data and AI. Download the eBook →

Resources
All the resources you need, in one place. Explore the resource library to find eBooks and videos on data and AI for healthcare and life sciences.

eBooks: Improving outcomes with Lakehouse for Healthcare and Life Sciences; Uncovering new patient insights by applying natural language processing (NLP) at scale; Delivering on the promise of real-world data with the lakehouse; Solution sheet: Introducing the lakehouse for healthcare and life sciences
Webinars: Workshop: FHIR improves real-time patient analytics with the lakehouse; Webinar: Driving healthcare innovation with the Chesapeake Regional Information System for our Patients; Workshop: Standardizing data with OMOP and predicting disease risk with ML; Workshop: Extracting real-world data with NLP; Workshop: Accelerating R&D with real-world data and AI
Blogs: Amgen accelerates drug development and delivery with the lakehouse for healthcare and life sciences; Introducing the lakehouse for healthcare and life sciences; Improving drug safety by detecting adverse events with natural language processing (NLP); Databricks' open-source genomics toolkit outperforms popular tools; Extracting oncology insights from real-world clinical data with NLP

Ready to get started?
We'd love to hear about your business goals and how our services team can help you achieve them. Try Databricks for free | Schedule a demo
|
https://www.databricks.com/dataaisummit/speaker/erin-boelkens/#
|
Erin Boelkens - Data + AI Summit 2023 | Databricks

Erin Boelkens
VP of Product at LiveRamp

Erin Boelkens is Vice President of Product at LiveRamp (NYSE: RAMP). In this role, she oversees LiveRamp solutions that enable clients to manage data assets safely and securely across identity, business development, addressability, healthcare, and data management. Previously, Erin was LiveRamp's VP of Engineering and Head of Global Identity Engineering, where she led a team of creative, innovative engineers delivering industry-leading identity products across offline and online channels. Erin joined the company in 2018 after spending 13 years in engineering, product, and data science at Acxiom (Nasdaq: ACXM). Erin holds a bachelor's degree in computer information systems and a Master of Science in management information systems from Arkansas State University. She is certified as a Scrum Product Owner and a Scrum Master by the Scrum Alliance, and holds a marketing certification from Pragmatic Marketing. She resides in Little Rock, Arkansas.
|
https://www.databricks.com/dataaisummit/speaker/pakshal-kumar-h-dhelaria/#
|
Pakshal Kumar H Dhelaria - Data + AI Summit 2023 | Databricks

Pakshal Kumar H Dhelaria
Senior Software Engineer 1 at Citrix

A strong leader with more than six years of experience solving critical problems in industry, passionate about innovative architectures and solutions. Expertise in Kafka, Apache Spark, streaming, and the Spring and Spring Boot frameworks; interested in machine learning and NLP. Has worked with relational databases (MySQL, Postgres) and the time-series database Druid, making extensive use of functional and reactive programming in Java and Node.js.
|
https://www.databricks.com/it/glossary
|
Glossaries Archive | Databricks

Glossary (A–Z)

ACID Transactions
What is a transaction?
In the context of databases and data storage systems, a transaction is any operation that is treated as a single unit of work, which either completes fully or does not complete at all, and leaves the storage system in a cons{...}

AdaGrad
Gradient descent is the most commonly used optimization method deployed in machine learning and deep learning algorithms. It's used to train a machine learning model.
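As a concrete illustration (a minimal sketch, not part of the glossary entry; the data and learning rate are invented), a gradient-descent update repeatedly nudges a parameter against the gradient of the loss:

```python
# Minimal gradient-descent sketch: fit y = w * x to data by minimizing
# mean squared error. Data points and learning rate are illustrative.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by w = 2

def grad(w):
    # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

w = 0.0
lr = 0.01  # learning rate
for _ in range(500):
    w -= lr * grad(w)  # step against the gradient

print(round(w, 3))  # converges toward 2.0
```

The same update rule underlies the gradient-descent variants described below; AdaGrad additionally adapts the learning rate per parameter based on accumulated past gradients.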
Types of Gradient Descent
There are three primary types of gradient descent {...}

Alternative Data
What is Alternative Data?
Alternative data is information gathered from alternative, non-traditional data sources that others are not using. Analysis of alternative data can provide insights beyond that which an in{...}

Anomaly Detection
Anomaly detection is the technique of identifying rare events or observations which can raise suspicions by being statistically different from the rest of the observations. Such “anomalous” behavior typically translates to some kind of a problem like{...}

Apache Hive
What is Apache Hive?
Apache Hive is open-source data warehouse software designed to read, write, and manage large datasets extracted from the Apache Hadoop Distributed File System (HDFS), one aspect of a larger Hadoop Ecosystem.
With exten{...}

Apache Kudu
What is Apache Kudu?
Apache Kudu is a free and open source columnar storage system developed for Apache Hadoop. It is an engine intended for structured data that supports low-latency, millisecond-scale random access to individual rows to{...}

Apache Kylin
What is Apache Kylin?
Apache Kylin is a distributed open source online analytical processing (OLAP) engine for interactive analytics on Big Data. Apache Kylin has been designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop/S{...}

Apache Spark
What Is Apache Spark?
Apache Spark is an open source analytics engine used for big data workloads. It can handle both batch and real-time analytics and data processing workloads. Apache Spark started in 2009 as a research project at {...}

Apache Spark as a Service
What is Apache Spark as a Service?
Apache Spark is an open source cluster computing framework for fast, real-time, large-scale data processing. Since its inception in 2009 at UC Berkeley's AMPLab, Spark has seen major growth. It is currently ra{...}

Artificial Neural Network
What is an Artificial Neural Network?
An artificial neural network (ANN) is a computing system patterned after the operation of neurons in the human brain.
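By way of illustration (a toy sketch, not part of the glossary entry; the weights are invented, where in practice they would be learned from data), a single artificial neuron computes a weighted sum of its inputs and passes it through an activation function:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term...
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid activation into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative inputs and weights.
out = neuron([0.5, 0.8], [0.4, -0.2], bias=0.1)
print(0.0 < out < 1.0)  # prints True: sigmoid output always lies in (0, 1)
```

A full network stacks many such neurons into layers, feeding each layer's outputs into the next.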
How Do Artificial Neural Networks Work?
Artificial Neural Networks can be best viewed{...}

Automation Bias
What is Automation Bias?
Automation bias is an over-reliance on automated aids and decision support systems. As automated decision aids are increasingly added to critical decision-making contexts such as intensive care units, {...}

Bayesian Neural Network
What Are Bayesian Neural Networks?
Bayesian Neural Networks (BNNs) extend standard networks with posterior inference in order to control over-fitting. From a broader perspective, the Bayesian approach uses the statistical methodology {...}

Big Data Analytics
The Difference Between Data and Big Data Analytics
Prior to the invention of Hadoop, the technologies underpinning modern storage and compute systems were relatively basic, limiting companies mostly to the analysis of "small data." Even this relat{...}

Bioinformatics
Bioinformatics is a field of study that uses computation to extract knowledge from large collections of biological data.
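For instance (a toy sketch, not from the glossary entry; the sequence is made up), computing the GC content of a DNA string — the fraction of bases that are guanine or cytosine — is a classic small bioinformatics computation:

```python
# Toy bioinformatics example: GC content of a DNA sequence,
# i.e. the fraction of bases that are guanine (G) or cytosine (C).
def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(round(gc_content("ATGCGC"), 2))  # 4 of 6 bases are G or C -> 0.67
```

Real bioinformatics pipelines apply computations like this across millions of sequencing reads, which is where distributed platforms become necessary.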
Bioinformatics refers to the use of IT in biotechnology for storing, retrieving, organizing and analyzing biological data.{...}

Catalyst Optimizer
At the core of Spark SQL is the Catalyst optimizer, which leverages advanced programming language features (e.g. Scala's pattern matching and quasiquotes) in a novel way to build an extensible query optimizer. Catalyst is based on functional program{...}

Complex Event Processing
What is Complex Event Processing [CEP]?
Complex event processing [CEP], also known as event, stream, or event stream processing, is the use of technology for querying data before storing it within a database or, in some cases, without it ever being s{...}

Continuous Applications
Continuous applications are end-to-end applications that react to data in real time. In particular, developers would like to use a single programming interface to support the facets of continuous applications that are currently handled in separate{...}

Convolutional Layer
In deep learning, a convolutional neural network (CNN or ConvNet) is a class of deep neural networks typically used to recognize patterns present in images, but also used for spatial data analysis, computer vision, natural language {...}

Data Analysis Platform
What is a Data Analysis Platform?
A data analytics platform is an ecosystem of services and technologies for performing analysis on voluminous, complex, and dynamic data, allowing you to retrieve, combine, interact with, explore, and visua{...}

Data Governance
What is Data Governance?
Data governance is the oversight to ensure data brings value and supports the business strategy. Data governance is more than just a tool or a process. It aligns data-related requirements to the business strategy using a f{...}

Data Lakehouse
What is a Data Lakehouse?
A data lakehouse is a new, open data management architecture that combines the flexibility, cost-efficiency, and scale of data lakes with the data management and ACID transactions of data warehouses, enabling business int{...}

Data Mart
What is a data mart?
A data mart is a curated database including a set of tables that are designed to serve the specific needs of a single data team, community, or line of business, like the marketing or engineering department. It is normally smal{...}

Data Sharing
What is data sharing?
Data sharing is the ability to make the same data available to one or many consumers. Nowadays, the ever-growing amount of data has become a strategic asset for any company. Sharing data - within your organization or external{...}

Data Vault
What is a data vault?
A data vault is a data modeling design pattern used to build a data warehouse for enterprise-scale analytics. The data vault has three types of entities: hubs, links, and satellites.
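The three entity types can be made concrete with a toy schema (table and column names are illustrative inventions, and SQLite stands in for a real warehouse purely for demonstration):

```python
import sqlite3

# Illustrative data vault: two hubs (business keys), a link (the
# relationship between them), and a satellite (descriptive attributes).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hub_customer (customer_hk TEXT PRIMARY KEY, customer_id TEXT, load_ts TEXT);
CREATE TABLE hub_product  (product_hk  TEXT PRIMARY KEY, product_id  TEXT, load_ts TEXT);
CREATE TABLE link_order   (order_hk    TEXT PRIMARY KEY,
                           customer_hk TEXT REFERENCES hub_customer,
                           product_hk  TEXT REFERENCES hub_product,
                           load_ts     TEXT);
CREATE TABLE sat_customer (customer_hk TEXT REFERENCES hub_customer,
                           name TEXT, city TEXT, load_ts TEXT);
""")
con.execute("INSERT INTO hub_customer VALUES ('c1', 'CUST-001', '2023-01-01')")
con.execute("INSERT INTO sat_customer VALUES ('c1', 'Ada', 'Berlin', '2023-01-01')")

# Hubs carry only business keys; descriptive data lives in satellites.
row = con.execute("""
    SELECT h.customer_id, s.name
    FROM hub_customer h JOIN sat_customer s USING (customer_hk)
""").fetchone()
print(row)  # ('CUST-001', 'Ada')
```

Separating keys (hubs), relationships (links), and attributes (satellites) is what lets a data vault absorb new sources and attribute changes without restructuring existing tables.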
Hubs represent core business concepts, {...}

Data Warehouse
What is a data warehouse?
A data warehouse is a data management system that stores current and historical data from multiple sources in a business-friendly manner for easier insights and reporting. Data warehouses are typically used for business i{...}

Databricks Runtime
Databricks Runtime is the set of software artifacts that run on the clusters of machines managed by Databricks. It includes Spark but also adds a number of components and updates that substantially improve the usability, performance, and security of {...}

DataFrames
What is a DataFrame?
A DataFrame is a data structure that organizes data into a 2-dimensional table of rows and columns, much like a spreadsheet. DataFrames are one of the most common data structures used in modern data analytics because they are {...}

Datasets
Datasets are a type-safe version of Spark's structured API for Java and Scala. This API is not available in Python and R, because those are dynamically typed languages, but it is a powerful tool for writing large applications in Scala and Java. Recal{...}

Deep Learning
What is Deep Learning?
Deep Learning is a subset of machine learning concerned with large amounts of data, with algorithms that have been inspired by the structure and function of the human brain, which is why deep learning models are often referre{...}

Demand Forecasting
What is demand forecasting?
Demand forecasting is the process of projecting consumer demand (equating to future revenue). Specifically, it is projecting the assortment of products shoppers will buy using quantitative and qualitative data.
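As a toy illustration of the quantitative side (the sales figures are invented, and this is a naive baseline rather than any Databricks method), a moving-average forecast projects next period's demand from recent history:

```python
# Naive demand forecast: predict next week's unit sales as the mean of
# the last `window` weeks. Weekly sales figures below are made up.
weekly_sales = [120, 135, 128, 150, 143, 160]

def moving_average_forecast(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

forecast = moving_average_forecast(weekly_sales)
print(forecast)  # (150 + 143 + 160) / 3 = 151.0
```

Production demand forecasting typically replaces this baseline with models that account for trend, seasonality, promotions, and external signals.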
Ret{...}

Dense Tensor
Dense tensors store values in a contiguous sequential block of memory where all values are represented. Tensors, or multi-dimensional arrays, are used in a diverse set of multi-dimensional data analysis applications. There are a number of software prod{...}

Digital Twin
What is a Digital Twin?
The classical definition of a digital twin is: “A digital twin is a virtual model designed to accurately reflect a physical object.” (IBM) For a discrete or continuous manufacturing process, a digital twin gathers {...}

DNA Sequence
What is a DNA Sequence?
DNA sequencing is the process of determining the exact sequence of nucleotides of DNA (deoxyribonucleic acid). Sequencing DNA means determining the order of the four chemical building blocks - adenine, guanine, cytosine, and thymine {...}

Extract Transform Load (ETL)
What is ETL?
As the amount of data, data sources, and data types at organizations grow, the importance of making use of that data in analytics, data science and machine learning initiatives to derive business insights grows as well. The need to pr{...}

Feature Engineering
Feature engineering for machine learning
Feature engineering, also called data preprocessing, is the process of converting raw data into features that can be used to develop machine learning models. This topic describes the principal concepts of f{...}

Genomics
Genomics is an area within genetics that concerns the sequencing and analysis of an organism's genome. Its main task is to determine the entire sequence of DNA, or the composition of the atoms that make up the DNA and the chemical bonds between the DN{...}

Hadoop Cluster
What Is a Hadoop Cluster?
Apache Hadoop is an open source, Java-based, software framework and parallel data processing engine. It enables big data analytics processing tasks to be broken down into smaller tasks that can be performed in parallel by{...}Hadoop Distributed File System (HDFS) HDFS
HDFS (Hadoop Distributed File System) is the primary storage system used by Hadoop applications. This open source framework works by rapidly transferring data between nodes. It's often used by companies who need to handle and store big data. {...}Hadoop Ecosystem What is the Hadoop Ecosystem?
Apache Hadoop ecosystem refers to the various components of the Apache Hadoop software library; it includes open source projects as well as a complete range of complementary tools. Some of the most well-known tools of{...}Hash Buckets In computing, a hash table [hash map] is a data structure that provides virtually direct access to objects based on a key [a unique String or Integer]. A hash table uses a hash function to compute an index into an array of buckets or slots, from whic{...}Hive Date Function What is a Hive Date Function?
Hive provides many built-in functions to help us in the processing and querying of data. Some of the functionalities provided by these functions include string manipulation, date manipulation, type conversion, conditi{...}Hosted Spark What is Hosted Spark?
Apache Spark is a fast and general cluster computing system for Big Data built around speed, ease of use, and advanced analytics that was originally built in 2009 at UC Berkeley. It provides high-level APIs in Scala, Java, Py{...}Jupyter Notebook What is a Jupyter Notebook?
A Jupyter Notebook is an open source web application that allows data scientists to create and share documents that include live code, equations, and other multimedia resources.
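Under the hood, a notebook document (.ipynb file) is plain JSON. The sketch below, using only the Python standard library, builds a minimal one-cell notebook to show that structure; the cell contents are illustrative, and this is a simplified subset of the full nbformat schema.

```python
# A Jupyter notebook file (.ipynb) is plain JSON. Build a minimal
# one-cell notebook document programmatically (simplified sketch).
import json

notebook = {
    "nbformat": 4,           # major version of the notebook format
    "nbformat_minor": 5,
    "metadata": {"language_info": {"name": "python"}},
    "cells": [
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,   # unexecuted cell
            "outputs": [],
            "source": ["print('hello from a notebook cell')\n"],
        }
    ],
}

serialized = json.dumps(notebook, indent=1)
print(json.loads(serialized)["cells"][0]["cell_type"])  # code
```

Because the format is just JSON, notebooks are easy to version, diff, and generate from other tools — one reason they became a standard vehicle for sharing analyses.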
What are Jupyter Notebooks used fo{...}Keras Model What is a Keras Model?
Keras is a high-level library for deep learning, built on top of Theano and Tensorflow. It is written in Python and provides a clean and convenient way to create a range of deep learning models. Keras has become one of {...}Lakehouse for Retail What is Lakehouse for Retail?
Lakehouse for Retail is Databricks’ first industry-specific Lakehouse. It helps retailers get up and running quickly through solution accelerators, data sharing capabilities, and a partner ecosystem.
Lakehouse fo{...}Lambda Architecture What is Lambda Architecture?
Lambda architecture is a way of processing massive quantities of data (i.e. "Big Data") that provides access to batch-processing and stream-processing methods with a hybrid approach. Lambda architecture is used to solv{...}Machine Learning Library (MLlib) Apache Spark’s Machine Learning Library (MLlib) is designed for simplicity, scalability, and easy integration with other tools. With the scalability, language compatibility, and speed of Spark, data scientists can focus on their data problems and mod{...}Machine Learning Models What is a machine learning Model?
A machine learning model is a program that can find patterns or make decisions from a previously unseen dataset. For example, in natural language processing, machine learning models can parse and correctly recogni{...}Managed Spark What is Managed Spark?
A managed Spark service lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. By using such a managed service you will be able to quickly create clusters on-demand, m{...}MapReduce What is MapReduce?
MapReduce is a Java-based, distributed execution framework within the Apache Hadoop Ecosystem. It takes away the complexity of distributed programming by exposing two processing steps that developers implement: 1) Map and {...}Materialized views Delta Pipelines / Materialized Views in Databricks Delta
Intro
Delta Pipelines provides a set of APIs and UI for managing the data pipeline lifecycle. This open-source framework helps data engineering teams simplify ETL development, improve dat{...}Medallion Architecture What is a medallion architecture?
A medallion architecture is a data design pattern used to logically organize data in a lakehouse, with the goal of incrementally and progressively improving the structure and quality of data as it flows through ea{...}ML Pipelines Typically, running machine learning algorithms involves a sequence of tasks including pre-processing, feature extraction, model fitting, and validation stages. For example, classifying text documents might involve text segmentation and c{...}MLOps What is MLOps?
MLOps stands for Machine Learning Operations. MLOps is a core function of Machine Learning engineering, focused on streamlining the process of taking machine learning models to production, and then maintaining and monitoring them. M{...}Model Risk Management Model risk management refers to the supervision of risks from the potential adverse consequences of decisions based on incorrect or misused models. The aim of model risk management is to employ techniques and practices that will identify, measure and{...}Neural Network What is a Neural Network?
A neural network is a computing model whose layered structure resembles the networked structure of neurons in the brain. It features interconnected processing elements called neurons that work together to produce an outpu{...}Open Banking What is Open Banking?
Open banking is a secure way to provide access to consumers' financial data, all contingent on customer consent. Driven by regulatory, technology, and competitive dynamics, Open Banking calls for the democratization of custo{...}Orchestration What is Orchestration?
Orchestration is the coordination and management of multiple computer systems, applications and/or services, stringing together multiple tasks in order to execute a larger workflow or process. These processes can consist of {...}Overall Equipment Effectiveness What is Overall Equipment Effectiveness?
Overall Equipment Effectiveness (OEE) is a measure of how well a manufacturing operation is utilized (facilities, time and material) compared to its full potential, during the periods when it is scheduled to{...}pandas DataFrame
When it comes to data science, using pandas DataFrames to their full potential can transform the way your business works. To do that, you'll need the right data structures. These will help you be as ef{...}Parquet What is Parquet?
Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in {...}Personalized Finance What is Personalized Finance?
Financial products and services are becoming increasingly commoditized and consumers are becoming more discerning as the media and retail industries have increased their penchant for personalized experiences. To remai{...}Predictive Analytics What is Predictive Analytics?
Predictive analytics is a form of advanced analytics that uses both new and historical data to determine patterns and predict future outcomes and trends.
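As a minimal sketch of the idea, historical data can be fit with a simple least-squares trend line and extrapolated to predict a future value. The monthly revenue figures below are hypothetical and chosen to lie on an exact line for clarity; real predictive analytics uses richer models and noisy data.

```python
# Minimal predictive-analytics sketch: fit a linear trend to historical
# observations with ordinary least squares, then predict a future value.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical historical monthly revenue (month index -> revenue).
months = [1, 2, 3, 4, 5]
revenue = [10.0, 12.0, 14.0, 16.0, 18.0]
slope, intercept = fit_line(months, revenue)
print(slope, intercept)        # 2.0 8.0
print(slope * 6 + intercept)   # predicted month-6 revenue: 20.0
```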
How Does Predictive Analytics Work?
Predictive analytics {...}Predictive Maintenance What is predictive maintenance?
Predictive Maintenance, in a nutshell, is all about figuring out when an asset should be maintained, and what specific maintenance activities need to be performed, based on an asset’s actual condition or state, rath{...}PyCharm PyCharm is an integrated development environment (IDE) used in computer programming, created for the Python programming language. When using PyCharm on Databricks, by default PyCharm creates a Python Virtual Environment, but you can configure to crea{...}PySpark What is PySpark?
Apache Spark is written in the Scala programming language. PySpark was released to support the collaboration of Apache Spark and Python; it is, in effect, a Python API for Spark. In addition, PySpark helps you interface wi{...}Real-Time Retail What is real-time data for Retail?
Real-time retail is real-time access to data. Moving from batch-oriented access, analysis and compute will allow data to be “always on,” therefore driving accurate, timely decisions and business intelligence. {...}Resilient Distributed Dataset (RDD)
RDD was the primary user-facing API in Spark from its inception. At its core, an RDD is an immutable distributed collection of elements of your data, partitioned across nodes in your cluster, that can be operated on in parallel with a low-level API {...}Snowflake Schema What is a snowflake schema?
A snowflake schema is a multi-dimensional data model that is an extension of a star schema, where dimension tables are broken down into subdimensions. Snowflake schemas are commonly used for business intelligence and re{...}Spark API If you are working with Spark, you will come across the three APIs: DataFrames, Datasets, and RDDs.
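The split between lazy transformations and evaluation-forcing actions that these APIs share can be sketched conceptually with plain Python generators. This is a standard-library illustration of the idea only, not the PySpark API itself; the class and method names mirror Spark's vocabulary for the sake of the analogy.

```python
# Conceptual sketch of Spark's transformation/action model: transformations
# such as map and filter are lazy and merely build up a pipeline, while an
# action such as collect forces evaluation. Mimicked here with generators.

class LazyDataset:
    def __init__(self, iterable):
        self._iterable = iterable

    def map(self, fn):            # transformation: lazy, returns a new dataset
        return LazyDataset(fn(x) for x in self._iterable)

    def filter(self, pred):       # transformation: lazy, returns a new dataset
        return LazyDataset(x for x in self._iterable if pred(x))

    def collect(self):            # action: triggers evaluation of the pipeline
        return list(self._iterable)

result = (LazyDataset(range(10))
          .map(lambda x: x * x)
          .filter(lambda x: x % 2 == 0)
          .collect())
print(result)  # [0, 4, 16, 36, 64]
```

In real Spark the same shape holds, but the pipeline is distributed across a cluster and optimized before execution rather than evaluated by a single generator chain.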
What are Resilient Distributed Datasets?
RDD or Resilient Distributed Datasets, is a collection of records with distributed computing, which are {...}Spark Applications Spark Applications consist of a driver process and a set of executor processes. The driver process runs your main() function, sits on a node in the cluster, and is responsible for three things: maintaining information about the Spark Application; res{...}Spark Elasticsearch What is Spark Elasticsearch?
Spark Elasticsearch is a NoSQL, distributed database that stores, retrieves, and manages document-oriented and semi-structured data. It is an open source, RESTful search engine built on top of Apache Lucene and r{...}Spark SQL Many data scientists, analysts, and general business intelligence users rely on interactive SQL queries for exploring data. Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can al{...}Spark Streaming Apache Spark Streaming is the previous generation of Apache Spark’s streaming engine. Spark Streaming no longer receives updates and is a legacy project. There is a newer and easier-to-use streaming engine in Apache Spark called Structured Stre{...}Spark Tuning What is Spark Performance Tuning?
Spark Performance Tuning refers to the process of adjusting settings for the memory, cores, and instances used by the system. This process helps ensure that Spark performs optimally and also preven{...}Sparklyr What is Sparklyr?
Sparklyr is an open-source package that provides an interface between R and Apache Spark. You can now leverage Spark’s capabilities in a modern R environment, due to Spark’s ability to interact with distributed data with little l{...}SparkR SparkR is a tool for running R on Spark. It follows the same principles as all of Spark’s other language bindings. To use SparkR, we simply import it into our environment and run our code. It’s all very similar to the Python API except that it follow{...}Sparse Tensor Python offers an inbuilt library called numpy to manipulate multi-dimensional arrays. The organization and use of this library is a primary requirement for developing the pytensor library. Sptensor is a class that represents the sparse tensor. A spa{...}Star Schema What is a star schema?
A star schema is a multi-dimensional data model used to organize data in a database so that it is easy to understand and analyze. Star schemas can be applied to data warehouses, databases, data marts, and other tools. The st{...}Streaming Analytics How Does Stream Analytics Work?
Streaming analytics, also known as event stream processing, is the analysis of huge pools of current and “in-motion” data through the use of continuous queries, called event streams. These streams are triggered by a{...}Structured Streaming Structured Streaming is a high-level API for stream processing that became production-ready in Spark 2.2. Structured Streaming allows you to take the same operations that you perform in batch mode using Spark’s structured APIs, and run them in a stre{...}Supply Chain Management What is supply chain management?
Supply chain management is the process of planning, implementing and controlling operations of the supply chain with the goal of efficiently and effectively producing and delivering products and services to the end{...}TensorFlow In November of 2015, Google released its open-source framework for machine learning and named it TensorFlow. It supports deep-learning, neural networks, and general numerical computations on CPUs, GPUs, and clusters of GPUs. One of the biggest advant{...}Tensorflow Estimator API What is the Tensorflow Estimator API?
Estimators represent a complete model while remaining intuitive enough for less experienced users. The Estimator API provides methods to train the model, to judge the model’s accuracy, and to generate predictions. TensorFlow {...}Transformations What Are Transformations?
In Spark, the core data structures are immutable meaning they cannot be changed once created. This might seem like a strange concept at first, if you cannot change it, how are you supposed to use it? In order to “change” {...}Tungsten What is the Tungsten Project?
Tungsten is the codename for the umbrella project to make changes to Apache Spark’s execution engine that focuses on substantially improving the efficiency of memory and CPU for Spark applications, to push performance{...}Unified AI Framework Unified Artificial Intelligence or UAI was announced by Facebook during F8 this year. This brings together two specific deep learning frameworks that Facebook created and open-sourced - PyTorch focused on research assuming access to large-scale compute r{...}Unified Data Analytics Unified Data Analytics is a new category of solutions that unify data processing with AI technologies, making AI much more achievable for enterprise organizations and enabling them to accelerate their AI initiatives. Unified Data Analytics makes it e{...}Unified Data Analytics Platform Databricks' Unified Data Analytics Platform helps organizations accelerate innovation by unifying data science with engineering and business. With Databricks as your Unified Data Analytics Platform, you can quickly prepare and clean data at mass{...}Unified Data Warehouse What is a Unified Data Warehouse?
A unified data warehouse, also known as an enterprise data warehouse, holds all the business information of an organization and makes it accessible across the company. Most companies today have their data managed in {...}What Is Hadoop? Apache Hadoop is an open source, Java-based software platform that manages data processing and storage for big data applications. The platform works by distributing Hadoop big data and analytics jobs across nodes in a computing cluster, breaking them{...}
at DatabricksWorldwideEnglish (United States)Deutsch (Germany)Français (France)Italiano (Italy)日本語 (Japan)한국어 (South Korea)Português (Brazil)Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights
|
https://www.databricks.com/dataaisummit/speaker/jacob-renn
|
Jacob Renn - Data + AI Summit 2023 | Databricks
Jacob Renn, Chief Technologist at AI Squared, Inc
Dr. Jacob Renn is co-founder and Chief Technologist of AI Squared, a seed-stage startup located in the Washington, DC area. At AI Squared, Jacob leads the company’s R&D efforts. Jacob is the lead developer of DLite, a family of large language models developed by AI Squared, and he is also the creator of the BeyondML project. Jacob also serves as adjunct faculty at Capitol Technology University, where he completed his PhD in Technology with a focus in Explainable Artificial Intelligence.
|
https://www.databricks.com/dataaisummit/speaker/jaison-dominic
|
Jaison Dominic - Data + AI Summit 2023 | Databricks
Jaison Dominic, Senior Manager Information Systems at Amgen
Jaison Dominic is a Transformational Data & Analytics leader passionate about driving value for patients and businesses through Data Engineering, Advanced Analytics, and Enterprise Data Fabric. He enjoys driving discussions on Data Strategy alignment with business objectives, leading teams, and building effective partnerships within and outside the organization.
|
https://www.databricks.com/company/newsroom/press-releases/databricks-announces-lakehouse-manufacturing-empowering-worlds
|
Databricks Announces Lakehouse for Manufacturing, Empowering the World’s Leading Manufacturers to Realize the Full Value of Their Data
April 4, 2023
Lakehouse for Manufacturing offers pre-built solutions, partner-designed Brickbuilder offerings and integrated AI capabilities tailored to customers across the manufacturing, logistics, transportation, energy and utilities industries
SAN FRANCISCO – April 4, 2023 – Databricks, the lakehouse company, today announced the Databricks Lakehouse for Manufacturing, the first open, enterprise-scale lakehouse platform tailored to manufacturers that unifies data and AI and delivers record-breaking performance for any analytics use case. The sheer volume of tools, systems and architectures required to run a modern manufacturing environment makes secure data sharing and collaboration a challenge at scale, with over 70 percent of data projects stalling at the proof of concept (PoC) stage. Available today, Databricks’ Lakehouse for Manufacturing breaks down these silos and is uniquely designed for manufacturers to access all of their data and make decisions in real-time. Databricks’ Lakehouse for Manufacturing has been adopted by industry-leading organizations like DuPont, Honeywell, Rolls-Royce, Shell and Tata Steel.
Databricks’ newest industry-specific lakehouse goes beyond the limitations of traditional data warehouses by offering integrated AI capabilities and pre-built solutions that accelerate time to value for manufacturers and their partners. These include powerful solutions for predictive maintenance, digital twins, supply chain optimization, demand forecasting, real-time IoT analytics and more. A robust partner ecosystem and custom, partner-built Brickbuilder Solutions offer customers even greater choice in delivering real-time insights and impact across the entire value chain, and at a lower total cost of ownership (TCO) than complex legacy technologies.
“We employed Databricks to optimize inventory planning using data and analytics, positioning parts where they need to be based on the insight we gain from our connected engines in real time and usage patterns we see in our service network,” said Stuart Hughes, Chief Information and Digital Officer at Rolls-Royce Civil Aerospace. “This has helped us minimize risks to engine availability, reduce lead times for spare parts and drive more efficiency in stock turns - all of this enables us to deliver TotalCare, the aviation industry’s leading Power-by-the-Hour (PBH) maintenance program.”
With Databricks, organizations can unlock the value of their existing investments and achieve AI at scale by unifying all of their data – regardless of type, source, frequency or workload – on a single platform. The Lakehouse for Manufacturing has robust data governance and sharing built-in, and enables organizations to deliver real-time insights for agile manufacturing and logistics, across their entire ecosystem.
Powerful industry solutions tailored for the lakehouse
The Lakehouse for Manufacturing includes access to packaged use case accelerators that are designed to jumpstart the analytics process and offer a blueprint to help organizations tackle critical, high-value industry challenges. Popular data solutions for Databricks’ Lakehouse for Manufacturing customers include:
Digital Twins: Created from data derived from sensors, digital twins enable engineers to monitor and model systems in real-time. With digital twins, manufacturers can process real-world data in real-time and deliver insights to multiple downstream applications, including process optimization modeling, risk assessments, condition monitoring, and optimized design.
Predictive Maintenance: By leveraging predictive maintenance, manufacturers can ingest real-time industrial Internet of Things (IIoT) data from field devices and perform complex time-series processing to maximize uptime and minimize maintenance costs.
Part-Level Forecasting: To avoid inventory stockouts, shorten lead times and maximize sales, manufacturers can perform demand forecasting at the part level rather than the aggregate level.
Overall Equipment Effectiveness: By incrementally ingesting and processing data from sensor/IoT devices in a variety of formats, organizations can provide a consistent approach to KPI reporting across a global manufacturing network.
Computer Vision: The development and implementation of computer vision applications enables manufacturers to automate critical manufacturing processes, improving quality, reducing waste and rework costs, and optimizing flow.
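For reference, the OEE KPI named in the list above is conventionally computed as availability × performance × quality. A minimal sketch follows; the shift figures are hypothetical assumptions, not data from any real facility.

```python
# Minimal sketch of the conventional OEE (Overall Equipment Effectiveness)
# calculation: OEE = availability x performance x quality.

def oee(planned_minutes, run_minutes, ideal_rate, total_units, good_units):
    availability = run_minutes / planned_minutes          # uptime vs plan
    performance = total_units / (run_minutes * ideal_rate)  # speed vs ideal
    quality = good_units / total_units                    # good-unit yield
    return availability * performance * quality

# Hypothetical 8-hour shift: 420 planned minutes, 378 actually running,
# ideal rate of 2 units/minute, 680 units produced, 646 of them good.
score = oee(planned_minutes=420, run_minutes=378, ideal_rate=2.0,
            total_units=680, good_units=646)
print(round(score, 3))  # 0.769
```

Computing this consistently across plants is exactly the kind of KPI standardization the accelerator above targets.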
“Shell has been undergoing a digital transformation as part of our ambition to deliver more and cleaner energy solutions. Databricks’ Lakehouse is central to the Shell.ai Platform and the ability to execute rapid queries on massive datasets,” said Dan Jeavons, VP Computational Science and Digital Innovation at Shell. “With the help of Databricks, Shell is better able to use its full historic data set to run 10,000+ inventory simulations across all its parts and facilities. Shell’s inventory prediction models now run in a few hours rather than days, significantly improving stocking practices and driving significant savings annually.”
Databricks Partners deliver an ecosystem of powerful, purpose-built solutions for manufacturers
Customers across the manufacturing industry also benefit from vetted data solutions from leading partners like Avanade, Celebal Technologies, DataSentics, Deloitte and Tredence, which are tailor-made to combine the power of Databricks’ Lakehouse Platform with proven industry expertise. Partner Brickbuilder Solutions and popular use cases for the Lakehouse for Manufacturing include:
Avanade Intelligent Manufacturing: Avanade enables manufacturers to harness all types of data, drive interoperability and realize more value throughout the manufacturing lifecycle with a comprehensive look at connected production facilities and assets.
Celebal Technologies Migrate to Databricks: A suite of proven tools from Celebal Technologies empowers organizations to easily migrate legacy on-premises/cloud environments to the Lakehouse Platform and addresses the key scalability, performance and cost challenges of legacy systems.
DataSentics Quality Inspector: With DataSentics, manufacturers can leverage computer vision to automate quality control and easily detect defects, foreign objects and anomalies throughout the manufacturing process, from classification and detection to product segmentation and tracking.
Deloitte Smart Factory: Deloitte offers automated Monthly Management Reporting to deliver dynamic insights and enable a digital organization supported by an enterprise data lake and advanced analytics.
Tredence Predictive Supply Risk Management: Tredence unifies siloed data and drives end-to-end visibility into order flows and supplier performance with a holistic view of the entire supply chain, coupled with real-time data to assess risk factors and prescriptive, AI-powered recommendations across all supply chain functions.
“Avanade is delighted to partner with industry innovators like Databricks. As the leading Microsoft Partner for Manufacturing, we see manufacturers getting smarter about how they use digital technologies – because they have to. Times are tough and innovations today must deliver more value more quickly across more of the organization than ever before. The potential of lakehouse is truly exciting and will play a significant part in our Industry X and Smart Digital Manufacturing services,” said Thomas Nall, Avanade Manufacturing Lead.
“Using the Lakehouse for Manufacturing, a business can utilize all data sources in their value chain so that the power of predictive AI and ML insights can be realized to identify inefficiencies in production processes, improve productivity, enhance quality control, and reduce supply chain costs. This data-driven manufacturing is where we see the industry going as companies seek to accelerate their Smart Factory transformations,” said Anthony Abbattista, Principal and Smart Factory Analytics Offering Leader at Deloitte Consulting LLP.
“With rising costs, plateauing industrial productivity, and talent gaps, manufacturing companies are facing unprecedented operational challenges. At the same time, autonomy, connectivity and electrification are shaping an entirely new approach of software-defined products that require a transformation of the business and operating model to be competitive and innovative. In the next 5 years, the companies that outperform in this industry will be the ones that not only manage data but effectively operationalize the value from data, analytics and AI at scale,” said Shiv Trisal, Global Industry Leader for Manufacturing at Databricks. “We are very excited to launch tailored accelerators that target the industry’s biggest pain points, and collaborate with leading partners to introduce Lakehouse for Manufacturing, enabling data teams to boost industrial productivity, gain nth-tier supply chain visibility and deliver smarter products and services at an accelerated pace.”
The introduction of the Lakehouse for Manufacturing comes on the heels of the recent release of Databricks Model Serving, for fully managed production ML and a new, native integration with VS Code. For more information, visit Databricks’ Lakehouse for Manufacturing homepage.
For those attending Hannover Messe, register to join Databricks on April 18th for An Evening at the Lakehouse at Insel Beach Club and learn more about Databricks’ platform and work with customers throughout the industry.
About Databricks
Databricks is the lakehouse company. More than 9,000 organizations worldwide — including Comcast, Condé Nast, and over 50% of the Fortune 500 — rely on the Databricks Lakehouse Platform to unify their data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe. Founded by the original creators of Apache Spark™, Delta Lake and MLflow, Databricks is on a mission to help data teams solve the world’s toughest problems. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Contact: [email protected]
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
Privacy Notice | Terms of Use | Your Privacy Choices | Your California Privacy Rights
|
https://www.databricks.com/p/webinar/tackle-data-transformation
|
Tackle Data Transformation With Delta Live Tables | Databricks

On-Demand: Delta Live Tables: Modern software engineering and management for ETL
Available on-demand

To drive data analysis, data science and machine learning, data engineers have the difficult and laborious task of cleansing complex, diverse data and transforming it into a usable source. Doing so requires knowing the ins and outs of the data infrastructure platform, building complex queries in various languages, and stitching them together for production. For many organizations, this complexity limits their ability to support these critical downstream use cases.

Watch this webinar to learn how Delta Live Tables simplifies the complexity of data transformation and ETL. Delta Live Tables (DLT) is the first ETL framework to use modern software engineering practices to deliver reliable and trusted data pipelines at any scale.

In this webinar, you will learn how Delta Live Tables enables:
- Analysts and data engineers to innovate rapidly with simple pipeline development and maintenance
- Data teams to remove operational complexity by automating administrative tasks and gaining broader visibility into pipeline operations
- Trusted data through built-in quality controls and quality monitoring, ensuring accurate and useful BI, data science and ML
- Simplified batch and streaming with self-optimizing and auto-scaling data pipelines

Speakers:
- Michael Armbrust, Distinguished Software Engineer, Databricks
- Abhay Prajapati, Principal Data Solutions Architect, JLL

Watch now
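The quality-control idea behind the built-in expectations mentioned above can be sketched in plain Python. This is an illustrative sketch only: the real Delta Live Tables API uses `@dlt.table` and `@dlt.expect_*` decorators and runs on Databricks, while the `expect_or_drop` helper and the sample data below are invented for illustration.

```python
# Sketch of an "expectation": drop rows that fail a quality rule and count
# them, so the pipeline can surface quality metrics. Illustrative only.

def expect_or_drop(name, predicate):
    """Decorator: drop rows failing `predicate`, counting drops for monitoring."""
    def wrap(fn):
        def run(rows):
            kept, dropped = [], 0
            for row in fn(rows):
                if predicate(row):
                    kept.append(row)
                else:
                    dropped += 1
            run.metrics = {name: dropped}  # would feed pipeline quality monitoring
            return kept
        run.metrics = {}
        return run
    return wrap

@expect_or_drop("valid_amount", lambda r: r["amount"] >= 0)
def clean_orders(rows):
    # the transformation step: normalize currency codes
    return [{**r, "currency": r["currency"].upper()} for r in rows]

orders = [
    {"id": 1, "amount": 30.0, "currency": "usd"},
    {"id": 2, "amount": -5.0, "currency": "usd"},  # fails the expectation
]
print(clean_orders(orders))   # one surviving row, currency upper-cased
print(clean_orders.metrics)   # {'valid_amount': 1}
```

The design point mirrors the webinar's pitch: the quality rule is declared next to the transformation, and failure counts become monitoring data rather than silent data loss.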
https://www.databricks.com/dataaisummit/code-of-conduct/#
|
Code of Conduct - Data + AI Summit 2023 | Databricks

Data + AI Summit Event, Organized by Databricks
Summit Organizers are committed to creating a safe and inclusive experience for our conference attendees, regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion. We do not tolerate harassment in any form. All communication — including at the event and various associated venues, as well as online — should be appropriate for a professional audience, including people of many different backgrounds and experiences. Be kind to others. Do not insult or put down other attendees. Act professionally. Remember that harassment and sexist, racist, or exclusionary jokes are not appropriate at any time at any event organized by Databricks.
Interaction Guidelines for Attendees
For all interactions during the Data + AI Summit Event, we expect participants to abide by the event’s Code of Conduct to ensure the environment remains productive and respectful. The following guidelines will further help ensure we maintain an inclusive experience throughout our event:
Be respectful of others.
Avoid conflicts and arguments.
Use common sense, kindness, and consideration together with the guidelines outlined above.
If you intend to participate in the event, please make sure to dress appropriately.
Enforcement
If a participant engages in behavior that doesn’t comply with these expectations, Summit Organizers may take any action we deem appropriate, including warning the participant, excluding the participant from certain activities, prohibiting the participant from attending future events organized by Databricks, expelling the participant from the event without a refund, and banning the participant from online forums and other similar experiences. Participants asked to stop any harassing or other unacceptable behavior are expected to comply immediately. Anyone violating these rules may be asked to leave the experience without a refund at the sole discretion of Summit Organizers.
Please note, while we take all concerns raised seriously, we will use our discretion to determine when and how to follow up on reported incidents, and may decline to take any further action and/or may direct the participant to other resources to address the concern.
Reporting an Issue
If you are being harassed, notice that someone else is, or have any concerns, please contact Summit Organizers at [email protected] and provide your name, phone number, email, and a description of the situation. Summit Organizers can only address complaints about behavior at the Event.
The reporting mechanisms under this Code of Conduct are not intended to address criminal activity or emergency situations. If you have been the victim of a crime or there is an emergency, please contact the appropriate municipal authorities, such as the police, fire, medical, or other emergency responders.
Thank you for helping make Databricks Data + AI Summit a welcoming, friendly place for all to share new ideas, learn, and connect.
Information for Presenters
Presenters who are unsure whether their presentations or other materials and communications are consistent with these expectations should contact Toby Malina at [email protected] in advance of the event.
Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided at this event.
|
https://www.databricks.com/dataaisummit/speaker/zachary-huang/#
|
Zachary Huang - Data + AI Summit 2023 | Databricks

Zachary Huang, PhD student at Columbia University
Zachary Huang is a PhD student at Columbia University. His research interests are in novel data management systems over large join graphs. His work has been applied to data exploration, machine learning, and data markets.
|
https://www.databricks.com/dataaisummit/speaker/zoe-durand/#
|
Zoe Durand - Data + AI Summit 2023 | Databricks

Zoe Durand, Senior Product Manager at Databricks
|
https://www.databricks.com/dataaisummit/speaker/rob-saker/#
|
Rob Saker - Data + AI Summit 2023 | Databricks

Rob Saker, Global VP, Retail and Manufacturing at Databricks
|
https://www.databricks.com/session_na20/automating-federal-aviation-administrations-faa-system-wide-information-management-swim-data-ingestion-and-analysis
|
Databricks - YouTube
|
https://www.databricks.com/dataaisummit/speaker/steven-yu
|
Steven Yu - Data + AI Summit 2023 | Databricks

Steven Yu, Principal Solutions Architect at Databricks
|
https://www.databricks.com/glossary/lakehouse-for-retail
|
What is Lakehouse for Retail? | Databricks

What is Lakehouse for Retail?
Lakehouse for Retail is Databricks’ first industry-specific Lakehouse. It helps retailers get up and running quickly through solution accelerators, data sharing capabilities, and a partner ecosystem. Lakehouse for Retail is the culmination of technologies, partners, tools, and industry initiatives to drive stronger collaboration around data + AI. It is made up of four fundamental building blocks:
Unified Data + AI Platform: Delivers critical capabilities required to run modern retail on a single platform:
- Make data-informed decisions in real time
- Deliver more accurate analysis with powerful AI
- Leverage a variety of data types on one unified platform

Partner Solutions: Customers can confidently adopt Lakehouse for Retail through partner-built use case solutions created on Lakehouse for Retail.
Driving Industry Data Sharing & Collaboration: Drive secure and open data sharing, as well as collaboration, to unlock innovation with partners.
Tools to Accelerate: Databricks and its partners have invested in a variety of retail solution accelerators to help customers quickly adopt and drive results with Lakehouse.

Why are retailers turning to Lakehouse for Retail?
Over the last two years, retailers have struggled with the massive shift to e-commerce and with shifting consumer preferences and needs. To address these challenges, retailers have built the following capabilities into their strategy:
- Operate in real time: Companies must be able to rapidly ingest data at scale and make insights available across the value chain in real time.
- Eliminate technical roadblocks: Allow organizations to perform fine-grained analysis for all products within their tight service-level agreements.
- Use multimodal data quickly and efficiently for analysis: Only 5-10% of a company’s data is structured. Tapping into the other 90% helps businesses better understand the environment around them and make better decisions.
- Gain an inexpensive and open method of collaboration: For data and analysis, collaboration is critical to ensure open interaction and innovation for all partners in the value chain. Enhanced collaboration also improves the speed of operations, builds richer analytics, and reduces the cost of alignment across the value chain.

What are the benefits to retailers?
Immediate benefits: Real-time access to data. Moving from batch-oriented access, analysis, and compute allows data to be “always on,” which drives real-time decisions and business intelligence. Real-time use cases such as demand forecasting, personalization, on-shelf availability, arrival time prediction, and order picking and consolidation provide value to the organization through improved supply chain agility, reduced cost to serve, optimized product availability, and stock replenishment.

Long-term benefits: Retail has always been about collaboration, but most of that collaboration didn’t involve data sharing or data collaboration. Lakehouse for Retail is changing this. Data sharing between retailers, suppliers, agencies, and distributors, in an open and secure environment, opens up additional fine-grained use cases, thereby promoting additional consumption.

Learn more about Lakehouse for Retail solutions.

Additional Resources
- Introduction to Lakehouse for Retail Video
- Collaborating Across the Retail Value Chain with Data and AI
- The Retail Lakehouse: Build Resiliency and Agility in the Age of Disruption
- Reimagining the future of retail with data + AI blog
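As a toy illustration of the demand-forecasting use case mentioned above, the following plain-Python sketch predicts next-day demand per SKU as a trailing moving average of recent daily sales. The function name and data are invented for illustration; a production forecast on the Lakehouse would typically use Spark and a proper time-series model.

```python
# Toy demand forecast: next-day demand per SKU = trailing moving average
# of recent daily sales. Illustrative only.
from collections import defaultdict

def forecast_next_day(daily_sales, window=3):
    """daily_sales: list of dicts mapping SKU -> units sold that day, oldest first."""
    history = defaultdict(list)
    for day in daily_sales:
        for sku, units in day.items():
            history[sku].append(units)
    return {
        sku: sum(units[-window:]) / min(window, len(units))
        for sku, units in history.items()
    }

sales = [
    {"SKU-1": 10, "SKU-2": 4},
    {"SKU-1": 12, "SKU-2": 6},
    {"SKU-1": 14, "SKU-2": 8},
]
print(forecast_next_day(sales))  # {'SKU-1': 12.0, 'SKU-2': 6.0}
```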
|
https://www.databricks.com/dataaisummit/speaker/justin-debrabant/#
|
Justin DeBrabant - Data + AI Summit 2023 | Databricks

Justin DeBrabant, SVP of Product at ActionIQ
Justin is Senior Vice President of Product at ActionIQ. He spent his formative years building large distributed systems to support data science and analytics, and holds a Ph.D. in Databases from Brown University, where he researched the forefront of modern data systems. For the last 10+ years, he has been passionate about building data-driven products that help realize the value of customer data by delivering truly customer-centric experiences.
|
https://www.databricks.com/solutions/industries/technology-and-software
|
Data Analytics, Machine Learning and AI for the Technology Industry | Databricks

Data Analytics and Machine Learning for the Technology and Software Industry
Bring new technologies to market faster with big data analytics and machine learning.

Develop Next-Generation Technology Solutions with Data Analytics and AI
Databricks enables technology and software companies to tap into the potential of data and machine learning to develop the new technologies and applications that customers crave, powered by the Databricks Unified Data Analytics Platform.

Product development and support: Analyze customer and market data to identify new product features and predict customer support needs for early intervention and remediation.
Security monitoring: Analyze product and network data in real time to detect anomalies and respond to threats before they impact application and system performance.
Predictive maintenance: From the factory floor to products in the field, analyze streaming IoT data to predict maintenance needs before they occur.

Customer Talk: Automating Support Ticket Forecasting with Databricks, Delta Lake and MLflow. Learn how Atlassian built a robust, fault-tolerant, auditable, and reproducible ML pipeline for predicting support tickets at a granular level.

Customer Talk: How Salesforce Uses Apache Spark and Databricks to Power Intelligent Services. Watch this Spark + AI Summit talk to learn how Salesforce uses Apache Spark and Databricks to discover new insights, power smarter decision making, and automate development workflows.

Customer Keynote: Threat Detection at Petabyte Scale. Watch this Spark + AI Summit keynote to learn how one of the largest tech companies in the world uses Databricks to monitor cyber threats in real time.

Case Study: Using AI to Uncover Revenue Opportunities. Read how People.ai uses machine learning to help enterprise customers drive actionable insights that uncover revenue opportunities.

“Databricks Unified Analytics Platform has helped foster collaboration across our data science and engineering teams, which has impacted innovation and productivity.” (John Landry, Distinguished Technologist, HP, Inc.)

Ready to get started? Try Databricks for free.
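The anomaly-detection pattern behind the security-monitoring use case above can be sketched with a simple rolling z-score check. This is an illustrative stdlib sketch (the function and data are invented here), not how Databricks implements threat detection; at petabyte scale this logic would run over streaming data with Spark.

```python
# Flag readings more than `threshold` standard deviations from the mean
# of a trailing window. Illustrative only.
import statistics

def anomalies(readings, window=20, threshold=3.0):
    flagged = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 5:   # need some history before judging
            continue
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if stdev and abs(value - mean) / stdev > threshold:
            flagged.append((i, value))
    return flagged

# e.g. login failures per minute: steady baseline, then a burst
traffic = [3, 4, 3, 5, 4, 3, 4, 5, 4, 3, 90]
print(anomalies(traffic))  # [(10, 90)]
```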
|
https://www.databricks.com/it/customers
|
Databricks Customers | Databricks

Databricks Customers
See how innovative companies across every industry are using the Databricks Lakehouse Platform to succeed.

Featured stories
Customer Story
AT&T democratizes data to prevent fraud, reduce churn and increase CLV
Databricks Lakehouse has helped AT&T accelerate AI across operations, including decreasing fraud by 70%–80%
Read more
Customer Story
Shell innovates with energy solutions for a cleaner world
Databricks Lakehouse helps democratize data and modernize operations globally
Read more
Customer Story
Burberry adopts data-driven content creation
Burberry sees a 70% improvement in time savings for generating image insights with Databricks Lakehouse
Read more
Customer Story
ABN AMRO transforms banking on a global scale
ABN AMRO puts data and Al into action with Databricks Lakehouse
Read more
Customer Story
Rolls-Royce delivers a greener future for air travel
Rolls-Royce decreases carbon through real-time data collection with Databricks Lakehouse
Watch video
Customer Story
SEGA drives the future of gaming with data and Al
SEGA uses Databricks Lakehouse to democratize data and deliver gaming experiences at scale
Watch video

See all customers

The data team effect
Data teams are the cohesive force that solves the world's most complex problems. Learn how.

Yours could be the next success story. Contact us.

Resources
- Customer Story: Learn how Databricks helps Condé Nast deliver personalized content to its customers.
- Webinar: Learn how Apple and Disney+ successfully unified analytics and AI.
- Podcast: Humana's CDAO on the role of data and AI in healthcare equity.

Ready to get started? Try for free or contact us.
|
https://www.databricks.com/dataaisummit/speaker/julie-ferris
|
Julie Ferris - Data + AI Summit 2023 | Databricks

Julie Ferris, Vice President, Commercial Optimization at Definitive Healthcare
|
https://www.databricks.com/dataaisummit/speaker/chris-mantz/#
|
Chris Mantz - Data + AI Summit 2023 | Databricks

Chris Mantz, Data Architect at Slalom
A consultant, as well as a data architect and data engineer by training, Chris has 4 years of experience architecting Databricks solutions across a variety of industries, including healthcare, transportation, and retail.
|
https://www.databricks.com/company/partners/technology
|
Databricks Technology Partners - Databricks
Databricks Technology Partners
Connect with Databricks Technology Partners to integrate data ingestion, business intelligence and governance capabilities with the Databricks Lakehouse Platform.
"With Databricks and Fivetran, we will be able to significantly improve marketing insights in the future. From a technical standpoint, the two tools interact harmoniously together and the integration feels very native." — Jan-Niklas Mühlenbrock, Team Lead, Business Intelligence & ERP at Paul Hewitt
Databricks Technology Partners integrate their solutions with Databricks to provide complementary capabilities for ETL, data ingestion, business intelligence, machine learning and governance. These integrations enable customers to leverage the Databricks Lakehouse Platform's reliability and scalability to innovate faster while deriving valuable data insights.
Partner Connect
Bring together all your data, analytics and AI tools on one open platform. With Partner Connect, Databricks provides a fast and easy way to connect your existing tools to your lakehouse using validated integrations, and helps you discover and try new solutions.
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights
|
https://www.databricks.com/br/product/machine-learning
|
Databricks Machine Learning | Databricks
Machine Learning
Accelerate your AI projects with a data-driven approach to machine learning.

Dive into machine learning with Databricks: Managed MLflow, Model Registry, Runtime for ML, Collaborative Notebooks, Feature Store, AutoML, Explainable AI, Repos, Model Monitoring and Model Serving. Built on an open lakehouse architecture, Databricks Machine Learning lets ML teams prepare and process data, streamlines collaboration across the enterprise, and standardizes the full lifecycle from experimentation to production.

More than $6 million in savings: CONA Services uses Databricks for a full ML lifecycle to optimize the supply chain for hundreds of thousands of stores.
R$3.9 million in savings: Via leverages machine learning to forecast demand accurately, cutting compute costs by 25%.
More than $50 million in cost reductions: Amgen is improving data science collaboration to accelerate drug discovery and reduce operating costs.

Simplify every aspect of data for ML
Built on an open lakehouse foundation with Delta Lake, Databricks ML lets your machine learning teams access, explore and prepare any type of data at any scale. Turn features into self-service production pipelines without depending on data engineering support.

Automate experiment tracking and governance
Managed MLflow automatically tracks your experiments, logging parameters, metrics, data and code versions, and model artifacts with each training run. You can quickly review past runs, compare results, and reproduce a previous run's result as needed. Once you have identified the best version of a model for production, register it in the Model Registry to simplify handoffs throughout the deployment lifecycle.

Manage the full model lifecycle, from data to production and back
Once trained models are registered, you can manage them collaboratively throughout their lifecycle using the Model Registry. Models can be versioned and moved through stages such as testing, staging, production and archived. Lifecycle management integrates with approval and governance workflows under role-based access controls, and comments and email notifications provide a rich collaborative environment for data teams.

Deploy ML models at low latency and high scale
Deploy models with a single click, without worrying about server management or scaling constraints. With Databricks, you can deploy your models as REST API endpoints anywhere with enterprise-grade availability.

Product components
Collaborative Notebooks: Databricks notebooks natively support Python, R, SQL and Scala. Users can work with the languages and libraries of their choice to discover, visualize and share insights.
Runtime for Machine Learning: One-click access to preconfigured, optimized ML clusters built on a scalable, reliable distribution of the most popular ML frameworks (such as PyTorch, TensorFlow and scikit-learn), with built-in optimizations for unmatched performance across the enterprise.
Feature Store: Make features easy to reuse with lineage-based search that leverages automatically registered data sources, and serve features for training and inference with simplified model deployment that requires no changes to the client application.
AutoML: Empower everyone, from ML experts to citizen data scientists, with a "glass box" approach to AutoML that not only delivers the best-performing model but also generates code for further refinement by experts.
Managed MLflow: Built on MLflow, the world's leading open source platform for the ML lifecycle, Managed MLflow helps ML models move quickly from experimentation to production with enterprise-grade security, reliability and scale.
Production-grade Model Serving: Serve models at any scale with one click, with the option of serverless compute.
Model Monitoring: Monitor model performance and its impact on business metrics in real time. Databricks provides end-to-end visibility and lineage, from production models back to source data systems, and lets you analyze model and data quality across the ML lifecycle so problems are caught before they do damage.
Repos: Repos let engineers follow Git workflows in Databricks, so data teams can take advantage of automated CI/CD workflows and code portability.

Migrate to Databricks
Tired of the data silos, slow performance and high costs of legacy systems like Hadoop and enterprise data warehouses? Migrate to the Databricks Lakehouse: the modern platform for all your data, analytics and AI use cases.
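The track, compare, register and promote loop described above can be sketched in plain Python. This is a purely illustrative, stdlib-only sketch: `Tracker`, `Registry` and the stage table are hypothetical stand-ins, not the actual Managed MLflow or Model Registry APIs.

```python
# Hedged illustration of the experiment-tracking and model-registry workflow.
# `Tracker` and `Registry` are invented names for this sketch only.
STAGES = ["None", "Staging", "Production", "Archived"]

class Tracker:
    def __init__(self):
        self.runs = []  # one dict of params/metrics per training run

    def log_run(self, params, metrics):
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric):
        # Compare past runs on a single metric, as you would when picking
        # the model version to register for production.
        return max(self.runs, key=lambda r: r["metrics"][metric])

class Registry:
    def __init__(self):
        self.versions = []

    def register(self, run):
        self.versions.append({"params": run["params"], "stage": "None"})
        return len(self.versions)  # version number, 1-based

    def transition(self, version, stage):
        # Only allow moving forward through the lifecycle, the way an
        # approval/governance workflow typically would.
        entry = self.versions[version - 1]
        if STAGES.index(stage) <= STAGES.index(entry["stage"]):
            raise ValueError(f"cannot move {entry['stage']} -> {stage}")
        entry["stage"] = stage

tracker = Tracker()
tracker.log_run({"lr": 0.1}, {"accuracy": 0.81})
tracker.log_run({"lr": 0.01}, {"accuracy": 0.87})

registry = Registry()
version = registry.register(tracker.best_run("accuracy"))
registry.transition(version, "Staging")
registry.transition(version, "Production")
print(version, registry.versions[0])
# → 1 {'params': {'lr': 0.01}, 'stage': 'Production'}
```

The forward-only transition check is the essence of the handoff the page describes: a version must pass through staging before production, and a demotion request fails loudly instead of silently rewriting history.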
Resources
All the resources you need. All in one place. Explore our resource library of e-books and videos on data science and machine learning:
- e-book: The Big Book of MLOps
- Explore the new Delta Sharing solution
- Migration guide: From Hadoop to Databricks
- Dive into machine learning solutions with Databricks
- Five key steps to a successful migration from Hadoop to the lakehouse architecture
- AutoML: Fast, simplified machine learning for everyone
- MLOps virtual event: Standardizing MLOps at scale
- Automating the ML lifecycle with Databricks Machine Learning
- MLOps virtual event: Operationalizing machine learning at scale
- Building machine learning platforms
- Delta Lake: The foundation of your lakehouse
- Step-by-step guide to Hadoop migration

Ready to get started? Try Databricks for free.
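"Deploying a model as a REST API endpoint", as described above, boils down to putting a scoring function behind an HTTP handler. The sketch below does that with only the standard library; the linear `predict` function, the `/invocations` path, and the JSON shape are assumptions made for illustration, not the Databricks Model Serving contract, which also handles provisioning and scaling for you.

```python
# Hedged sketch: a model behind a minimal REST scoring endpoint (stdlib only).
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Hypothetical model: a fixed linear scorer, used purely for illustration.
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class ScoringHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        response = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), ScoringHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/invocations",
    data=json.dumps({"features": [1.0, 2.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # → {'prediction': 1.6}
server.shutdown()
```

A managed serving layer replaces everything here except `predict`: the endpoint URL, scaling, and availability are handled by the platform.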
|
https://www.databricks.com/try-databricks-azure?itm_data=AzurePage-HeroTrialButton-AzureTrial
|
Try Databricks on Azure – Databricks
Discover how easy it is to simplify your data architecture by unifying all of your analytics and AI workloads on a simple open lakehouse platform. Azure Databricks is optimized for Azure and tightly integrated with Azure Data Lake Storage, Azure Data Factory, Power BI and many more Azure services. Join thousands of customers who already use this game-changing platform for their data teams.
|
https://www.databricks.com/dataaisummit/speaker/neil-patel
|
Neil Patel - Data + AI Summit 2023 | Databricks
Neil Patel, Lead Specialist Solutions Architect at Databricks
Neil Patel is a Specialist Solutions Architect at Databricks. He has worked on a variety of use cases and problems across different customers.
|
https://www.databricks.com/dataaisummit/speaker/vincent-chen
|
Vincent Chen - Data + AI Summit 2023 | Databricks
Vincent Chen, Director of Product / Founding Engineer at Snorkel AI
Vincent Chen leads product for machine learning experiences in Snorkel Flow. Before that, he led Snorkel's ML engineering team and performed research at the Stanford AI Lab, where he worked on the foundations of data-centric machine learning systems.
|
https://www.databricks.com/dataaisummit/speaker/rahil-bhatnagar
|
Rahil Bhatnagar - Data + AI Summit 2023 | Databricks
Rahil Bhatnagar, Development Lead, LOLA at Anheuser-Busch
Rahil Bhatnagar has experience leading cross-functional teams to build scalable products, taking them from idea to production, and applying his distributed systems and game development background to deliver sustainable, dynamic solutions on time. He currently leads and scales Anheuser-Busch's machine learning platform, LOLA, to meet the growing demand for machine learning insights at a tech-first FMCPG.
|
https://www.databricks.com/jp/legal/privacynotice
|
Privacy Notice | Databricks
Privacy Notice
This Privacy Notice explains how Databricks, Inc. and its affiliates ("Databricks", "we", "our", and "us") collects, uses, shares and otherwise processes your personal information (also known as personal data) in connection with the use of Databricks websites and applications that link to this Privacy Notice (the "Sites"), our data processing platform products and services (the "Platform Services") and in the usual course of business, such as in connection with our events, sales, and marketing activities (collectively, "Databricks Services").
It also contains information about your choices and privacy rights.Our ServicesWe provide the Platform Services to our customers and users (collectively, “Customers”) under an agreement with them and solely for their benefit and the benefit of personnel authorized to use the Platform Services (“Authorized Users”). Our processing of such data is governed by our agreement with the relevant Customer. This Privacy Notice does not apply to (i) the data that our Customers upload, submit or otherwise make available to the Platform Services and other data that we process on their behalf, as defined in our agreement with the Customer; (ii) any products, services, websites, or content that are offered by third parties or that have their own privacy notice; or (iii) personal information that we collect and process in connection with our recruitment activities, which is covered under our Applicant Privacy Notice.We recommend that you read this Privacy Notice in full to ensure that you are informed. However, if you only want to access a particular section of this Privacy Notice, you can click on the link below to go to that section.Information We Collect About YouHow We Use Your InformationHow We Share Your InformationInternational TransfersYour Choices and RightsAdditional Information for Certain JurisdictionsOther Important InformationChanges to this NoticeHow to Contact UsInformation We Collect About YouInformation that we collect from or about you includes information you provide, information we collect automatically, and information we receive from other sources.Information you provideWhen using our Databricks Services, we may collect certain information, such as your name, email address, phone number, postal address, job title, and company name. 
We may also collect other information that you provide through your interactions with us, for example if you request information about our Platform Services, interact with our sales team or contact customer support, complete a survey, provide feedback or post comments, register for an event, or take part in marketing activities. We may keep a record of your communications with us and other information you share during the course of the communications.When you create an account, for example, through our Sites or register to use our Platform Services, we may collect your personal information, such as your name and contact information. We may also collect credit card information if chosen by you as a payment method, which may be shared with our third party service providers, including for payment and billing purposes. Information we collect automatically We use standard automated data collection tools, such as cookies, web beacons, tracking pixels, tags, and similar tools, to collect information about how people use our Sites and interact with our emails.For example, when you visit our Sites we (or an authorized third party) may collect certain information from you or your device. This may include information about your computer or device (such as operating system, device identifier, browser language, and Internet Protocol (IP) address), and information about your activities on our Sites (such as how you came to our Sites, access times, the links you click on, and other statistical information). For example, your IP address may be used to derive general location information. We use this information to help us understand how you are using our Sites and how to better provide the Sites to you. We may also use web beacons and pixels in our emails. For example, we may place a pixel in our emails that notifies us when you click on a link in the email. We use these technologies to improve our communications. 
The types of data collection tools we use may change over time as technology evolves. You can learn more about our use of cookies and similar tools, as well as how to opt out of certain data collection, by visiting our Cookie Notice. When you use our Platform Services, we automatically collect information about how you are using the Platform Services (“Usage Data”). While most Usage Data is not personal information, it may include information about your account (such as User ID, email address, or Internet Protocol (IP) address) and information about your computer or device (such as browser type and operating system). It may also include information about your activities within the Platform Services, such as the pages or features you access or use, the time spent on those pages or features, search terms entered, commands executed, information about the types and size of files analyzed via the Platform Services, and other statistical information relating to your use of the Platform Services. We collect Usage Data to provide, support and operate the Platform Services, for network and information security, and to better understand how our Authorized Users and Customers are using the Platform Services to improve our products and services. We may also use the information we collect automatically (for example, IP address, and unique device identifiers) to identify the same unique person across Databricks Services to provide a more seamless and personalized experience to you. Information we receive from other sourcesWe may obtain information about you from third party sources, including resellers, distributors, business partners, event sponsors, security and fraud detection services, social media platforms, and publicly available sources. 
Examples of information that we receive from third parties include marketing and sales information (such as name, email address, phone number and similar contact information), and purchase, support and other information about your interactions with our Sites and Platform Services. We may combine such information with the information we receive and collect from you.How We Use Your InformationWe use your personal information to provide, maintain, improve and update our Databricks Services. Our purposes for collecting your personal information include:to provide, maintain, deliver and update the Databricks Services;to create and maintain your Databricks account;to measure your use and improve Databricks Services, and to develop new products and services;for billing, payment, or account management; for example, to identify your account and correctly identify your usage of our products and services;to provide you with customer service and support;to register and provide you with training and certification programs;to investigate security issues, prevent fraud, or combat the illegal or controlled uses of our products and services;for sales phone calls for training and coaching purposes, quality assurance and administration (in accordance with applicable laws), including to analyze sales calls using analytics tools to gain better insights into our interactions with customers; to send you notifications about the Databricks Services, including technical notices, updates, security alerts, administrative messages and invoices;to respond to your questions, comments, and requests, including to keep in contact with you regarding the products and services you use;to tailor and send you newsletters, emails and other content to promote our products and services (you can always unsubscribe from our marketing emails by clicking here) and to allow third party partners (like our event sponsors) to send you marketing communications about their services, in accordance with your 
preferences;to personalize your experience when using our Sites and Platform Services;for advertising purposes; for example, to display and measure advertising on third party websites;to contact you to conduct surveys and for market research purposes;to generate and analyze statistical information about how our Sites and Platform Services are used in the aggregate;for other legitimate interests or lawful business purposes; for example, customer surveys, collecting feedback, and conducting audits;to comply with our obligations under applicable law, legal process, or government regulation; andfor other purposes, where you have given consent.How We Share Your InformationWe may share your personal information with third parties as follows:with our affiliates and subsidiaries for the purposes described in this Privacy Notice;with our service providers who assist us in providing the Databricks Services, such as billing, payment card processing, customer support, sales and marketing, and data analysis, subject to confidentiality obligations and the requirement that those service providers do not sell your personal information;with our service providers who assist us with detecting and preventing fraud, security threats or other illegal or malicious behavior, for example Sift who provides fraud detection services where your personal information is processed by Sift in accordance with its Privacy Notice available at https://sift.com/service-privacy;with third party business partners, such as resellers, distributors, and/or referral partners, who are involved in providing content, products or services to our prospects or Customers. 
We may also engage with third party partners who are working with us to organize or sponsor an event to which you have registered to enable them to contact you about the event or their services (but only where we have a lawful basis to do so, such as your consent where required by applicable law);with marketing partners, such as advertising providers that tailor online ads to your interests based on information they collect about your online activity (known as interest-based advertising);with the organization that is sponsoring your training or certification program, for example to notify them of your registration and completion of the course;when authorized by law or we deem necessary to comply with a legal process;when required to protect and defend the rights or property of Databricks or our Customers, including the security of our Sites, products and services (including the Platform Services);when necessary to protect the personal safety, property or other rights of the public, Databricks or our Customers;where it has been de-identified, including through aggregation or anonymization;when you instruct us to do so;where you have consented to the sharing of your information with third parties; orin connection with a merger, sale, financing or reorganization of all or part of our business.International TransfersDatabricks may transfer your personal information to countries other than your country of residence. In particular, we may transfer your personal information to the United States and other countries where our affiliates, business partners and services providers are located. These countries may not have equivalent data protection laws to the country where you reside. Wherever we process your personal information, we take appropriate steps to ensure it is protected in accordance with this Privacy Notice and applicable data protection laws. 
These safeguards include implementing the European Commission's Standard Contractual Clauses for transfers of personal information from the EEA or Switzerland between us and our business partners and service providers, and equivalent measures for transfers of personal information from the United Kingdom. Databricks also offers our Customers the ability to enter into a data processing addendum (DPA) that contains the Standard Contractual Clauses, for transfers between us and our Customers. We also make use of supplementary measures to ensure your information is adequately protected.

Privacy Shield Notice
Databricks adheres to the principles of the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks, although Databricks no longer relies on the EU-U.S. or Swiss-U.S. Privacy Shield Frameworks as a legal basis for transfers of personal information in light of the judgment of the Court of Justice of the European Union in Case C-311/18. To learn more, visit our Privacy Shield Notice.

Your Choices and Rights
We offer you choices regarding the collection, use and sharing of your personal information, and we will respect the choices you make in accordance with applicable law. Please note that if you decide not to provide us with certain personal information, you may not be able to access certain features of the Sites or use the Platform Services.

Account information
If you want to correct, update or delete your account information, please log in to your Databricks account and update your profile.

Opt out of marketing
We may periodically send you marketing communications that promote our products and services, consistent with your choices. You may opt out of receiving such communications either by following the unsubscribe instructions in the communication you receive or by clicking here.
Please note that we may still send you important service-related communications regarding our products or services, such as communications about your subscription or account, service announcements or security information.

Your privacy rights
Depending upon your place of residence, you may have rights in relation to your personal information. Please review the jurisdiction-specific sections below, including the disclosures for California residents. Depending on applicable data protection laws, those rights may include asking us to provide certain information about our collection and processing of your personal information, or requesting access, correction or deletion of your personal information. You also have the right to withdraw your consent, to the extent we rely on consent to process your personal information. If you wish to exercise any of your rights under applicable data protection laws, submit a request online by completing the request form here or emailing us at [email protected]. We will respond to requests that we receive in accordance with applicable laws. Databricks may take certain steps to verify your request using information available to us, such as your email address or other information associated with your Databricks account, and if needed we may ask you to provide additional information for the purposes of verifying your request. Any information you provide to us for verification purposes will only be used to process and maintain a record of your request.

As described above, we may also process personal information that has been submitted by a Customer to our Platform Services. If your personal information has been submitted to the Platform Services by or on behalf of a Databricks Customer and you wish to exercise your privacy rights, please direct your request to the relevant Customer.
For other inquiries, please contact us at [email protected].

Additional Information for Certain Jurisdictions
This section provides additional information about our privacy practices for certain jurisdictions.

California
If you are a California resident, the California Consumer Privacy Act ("CCPA") requires us to provide you with additional information regarding your rights with respect to your "personal information." This information is described in our Supplemental Privacy Notice to California Residents.

Other US States
Depending on applicable laws in your state of residence, you may request to: (1) confirm whether or not we process your personal information; (2) access, correct, or delete personal information we maintain about you; (3) receive a portable copy of such personal information; and/or (4) restrict or opt out of certain processing of your personal information, such as targeted advertising, or profiling in furtherance of decisions that produce legal or similarly significant effects. If we refuse to take action on a request, we will provide instructions on how you may appeal the decision. We will respond to requests consistent with applicable law.

European Economic Area, UK and Switzerland
If you are located in the European Economic Area, United Kingdom or Switzerland, the controller of your personal information is Databricks, Inc., 160 Spear Street, Suite 1300, San Francisco, CA 94105, United States. We only collect your personal information if we have a legal basis for doing so. The legal basis that we rely on depends on the personal information concerned and the specific context in which we collect it.
Generally, we collect and process your personal information where:
- We need it to enter into or perform a contract with you, such as to provide you with the Platform Services, respond to your request, or provide you with customer support;
- We need to process your personal information to comply with a legal obligation (such as to comply with applicable legal, tax and accounting requirements) or to protect the vital interests of you or other individuals;
- You give us your consent, such as to receive certain marketing communications; or
- We have a legitimate interest, such as to respond to your requests and inquiries, to ensure the security of the Sites and Platform Services, to detect and prevent fraud, to maintain, customize and improve the Sites and Platform Services, to promote Databricks and our Platform Services, and to defend our interests and rights.

If you have consented to our use of your personal information for a specific purpose, you have the right to change your mind at any time, but this will not affect our processing of your information that has already taken place. You also have the following rights with respect to your personal information:
- The right to access, correct, update, or request deletion of your personal information;
- The right to object to the processing of your personal information or ask that we restrict the processing of your personal information;
- The right to request portability of your personal information;
- The right to withdraw your consent at any time, if we collected and processed your personal information with your consent; and
- The right to lodge a complaint with your national data protection authority or equivalent regulatory body.

If you wish to exercise any of your rights under data protection laws, please contact us as described under "Your Choices and Rights".

Other Important Information

Notice to Authorized Users
Our Platform Services are intended to be used by organizations.
Where the Platform Services are made available to you through an organization (e.g., your employer), that organization is the administrator of the Platform Services and is responsible for the accounts and/or services over which it has control. For example, administrators can access and change information in your account or restrict and terminate your access to the Platform Services. We are not responsible for the privacy or security practices of an administrator's organization, which may be different from this Privacy Notice. Please contact your organization or refer to your organization's policies for more information.

Data Retention
Databricks retains the personal information described in this Privacy Notice for as long as you use our Databricks Services, as may be required by law (for example, to comply with applicable legal, tax or accounting requirements), as necessary for other legitimate business or commercial purposes described in this Privacy Notice (for example, to resolve disputes or enforce our agreements), or as otherwise communicated to you.

Security
We are committed to protecting your information. We use a variety of technical, physical, and organizational security measures designed to protect against unauthorized access, alteration, disclosure, or destruction of information. However, no security measures are perfect or impenetrable, and we cannot guarantee the security of your information.

Third Party Services
Our Databricks Services may contain links to third party websites, applications, services, or social networks (including co-branded websites or products that are maintained by one of our business partners). We may also make available certain features that allow you to sign into our Sites using third party login credentials (such as LinkedIn, Facebook, Twitter and Google+) or access third party services from our Platform Services (such as GitHub).
Any information that you choose to submit to third party services is not covered by this Privacy Notice. We encourage you to read the terms of use and privacy notices of such third party services before sharing your information with them, to understand how your information may be collected and used.

Children's Data
The Sites and Platform Services are not directed to children under 18 years of age, and Databricks does not knowingly collect personal information from children under 18. If we learn that we have collected any personal information from children under 18, we will promptly take steps to delete such information. If you are aware that a child has submitted such information to us, please contact us using the details provided below.

Changes to this Notice
Databricks may change this Privacy Notice from time to time. We will post any changes on this page and, if we make material changes, provide a more prominent notice (for example, by adding a statement to the website landing page, providing notice through the Platform Services login screen, or by emailing you). You can see the date on which the latest version of this Privacy Notice was posted below. If you disagree with any changes to this Privacy Notice, you should stop using the Databricks Services and deactivate your Databricks account.

How to Contact Us
Please contact us at [email protected] if you have any questions about our privacy practices or this Privacy Notice.
You can also write to us at Databricks Inc., 160 Spear Street, Suite 1300, San Francisco, CA 94105, Attn: Privacy. If you interact with Databricks through or on behalf of your organization, then your personal information may also be subject to your organization's privacy practices, and you should direct any questions to that organization.

Last updated: January 3, 2023

Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation. Privacy Notice | Terms of Use | Your Privacy Choices | Your California Privacy Rights
|
https://www.databricks.com/kr/professional-services
|
Databricks Professional Services | Databricks
Contact us to learn more.
Put your project on the fast track to success with world-class data engineering, data science and project management expertise. Databricks Professional Services can help at any point in your data and AI journey.

Benefits
Accelerate your data and AI journey: Databricks offerings and expert services give customers optimal, tailored acceleration across the data and AI journey, from initial workspace onboarding to building enterprise-scale DataOps and center-of-excellence practices.
Mitigate project risk: Whether you are moving existing workloads to Databricks or building new data products, data and AI pipelines, or machine learning projects, Databricks serves as a partner and trusted advisor, helping you minimize risk and maximize value at every step of the journey.
Operationalize at scale: Building a data pipeline POC or developing a single-node model is relatively manageable. The challenge lies in successfully adopting and scaling data and AI practices across the entire organization. Databricks' prescriptive services and expertise help you reach these demanding goals.

Services
Put your project on the fast track to success with world-class data engineering, data science and project management expertise.
Jump-start: Shorten project timelines by mastering the Databricks platform and its key capabilities following best practices.
Hadoop migration: Databricks' prescriptive approach maximizes the value of your existing data and pipeline investments and delivers a seamless migration.
Build your lakehouse: Quickly implement a simplified, unified platform for data analytics, data science and ML to lay the foundation for your lakehouse vision.
Machine learning: Strengthen enterprise ML initiatives and adoption with Databricks' prescriptive methodology.
Shared services accelerator: Accelerate an enterprise-scale operating model and excel with data and AI using Databricks' prescriptive methodology.
Custom services: Ask about a custom statement of work tailored to your unique requirements. Databricks Professional Services experts have delivered successful full-lifecycle projects for complex, distinctive and targeted needs.
Resident Solutions Architect | Data Scientist: Highly experienced, technical resources with outstanding leadership and consulting skills. Databricks draws on its broad partner ecosystem to deliver services together as needed for customer success.

Databricks and Spark experts: more than a decade of accumulated experience, deep big data knowledge and real-world implementation skills.
Project planning and execution: design and architecture support, aligning projects with platform capabilities, and advice on project timelines and resource requirements.
Implementation and production planning: architecting solutions for scalability, prototype development support, and addressing DevOps integration requirements.
COE implementation support: developing common standards and frameworks, serving as a resource for multiple teams, and facilitating interaction with other Databricks teams.

Resources: Success Credits program overview | Redemption request form

Why Databricks Academy: Learn and master data analytics through training and certification from the team that started the Spark research project at UC Berkeley. Upgrade your skills today with Databricks Academy.

Ready to get started? Contact us.
|
https://www.databricks.com/dataaisummit/speaker/milos-colic
|
Milos Colic - Data + AI Summit 2023 | Databricks
Milos Colic, Tech Lead EMEA Public Sector - Sr. Solutions Architect at Databricks
Milos Colic is Tech Lead for Public Sector UK&I at Databricks. He is passionate about big data processing and has been working with Apache Spark for more than 5 years. Milos co-authored the Mosaic framework, built on top of Spark, to process geospatial data efficiently at large scale. He has been very active on the Databricks blog, writing about data products, FAIR standards, geospatial analytics, data linking, personalization in retail banking, GDPR, data sharing and more.
|
https://www.databricks.com/dataaisummit/speaker/menglei-sun/#
|
Menglei Sun - Data + AI Summit 2023 | Databricks
Menglei Sun, Senior Software Engineer at Databricks
Menglei Sun is a senior software engineer at Databricks working on data lineage and data discovery projects. Previously, Menglei worked at Houzz and BlackRock on data infrastructure, data engineering and platform teams.
|
https://www.databricks.com/dataaisummit/speaker/dael-williamson/#
|
Dael Williamson - Data + AI Summit 2023 | Databricks
Dael Williamson, Field CTO at Databricks
As the EMEA CTO for Databricks, Dael provides thought leadership and guidance to C-level executives at major customers. Prior to joining Databricks, Dael was the Global Data Technology Lead at Avanade/Accenture. He is an entrepreneurial CTO and business platform economist focused on digital, data and AI-led business transformations across different industries, and a published data scientist in the field of protein molecular modelling with extensive experience in both start-ups and enterprises.
|
https://www.databricks.com/dataaisummit/speaker/anup-segu/#
|
Anup Segu - Data + AI Summit 2023 | Databricks
Anup Segu, Co-Head of Data Engineering at YipitData
Anup is the Co-Head of Data Engineering at YipitData, a leader in the alternative data industry that provides data insights for investment firms and corporations. At YipitData, Anup helped found the Data Engineering department, architected its petabyte-scale data platform, and drove adoption of data analytics and Spark across the company. Previously, Anup worked in investment banking at Citigroup and studied at Indiana University.
|
https://www.databricks.com/solutions/accelerators/on-shelf-availability
|
Improve On-Shelf Availability | Databricks
Solution Accelerator: Improve On-Shelf Availability
Pre-built code, sample data and step-by-step instructions ready to go in a Databricks notebook.
Use AI out-of-stock modeling to improve on-shelf availability. Out of stock (OOS) is one of the biggest problems in retail. This Solution Accelerator shows how OOS can be addressed with real-time data and analytics, using the Databricks Lakehouse Platform to solve on-shelf availability in real time and increase retail sales. The accelerator can also be applied to supply chain solutions.
- Use real-time insights to rapidly respond to demand
- Drive more sales with on-shelf availability
- Scale out your solution to accommodate an operation of any size
Deliver innovation faster with Solution Accelerators for popular data and AI use cases across industries.
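To make the idea of out-of-stock detection concrete, here is a minimal, hypothetical sketch — not the accelerator's actual model — of one common heuristic: flag a store/item pair as possibly off-shelf when an item that historically sells every day shows a run of trailing zero-sales days. The function name, data shape and threshold below are illustrative assumptions only.

```python
# Hypothetical out-of-stock (OOS) signal, for illustration only.
# Assumption: an item with a positive historical daily average that
# records `zero_run` consecutive zero-sales days may be off the shelf.

def flag_possible_oos(daily_units, zero_run=3):
    """daily_units maps (store, item) -> list of units sold per day,
    oldest first. Returns keys whose last `zero_run` days were all zero
    despite a positive average before that window."""
    flagged = []
    for key, units in daily_units.items():
        if len(units) <= zero_run:
            continue  # not enough history to judge
        recent, hist = units[-zero_run:], units[:-zero_run]
        if sum(hist) / len(hist) > 0 and all(u == 0 for u in recent):
            flagged.append(key)
    return flagged

# Toy data: item "A" stops selling for the last 3 days, "B" keeps selling.
sales = {
    (1, "A"): [5, 4, 6, 5, 4, 5, 6, 0, 0, 0],
    (1, "B"): [3, 2, 4, 3, 2, 3, 4, 3, 2, 3],
}
print(flag_possible_oos(sales))  # [(1, 'A')]
```

In a real deployment this kind of rule would run over streaming point-of-sale data and be combined with a statistical demand model rather than a fixed zero-run threshold.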
See our full library of solutions. Ready to get started? Try Databricks for free.
|
https://www.databricks.com/it/university
|
Databricks University Alliance for Aspiring Data Scientists | Databricks
University Alliance: resources and materials for educators, students and aspiring data scientists who want to build with Databricks.
At Databricks, we believe that university students should learn the latest data science tools to enhance their value in the workforce upon graduation.
The Databricks University Alliance provides complimentary assets to educators and students for teaching and learning these next-generation tools in both in-person and virtual classrooms.

All approved educators and faculty receive:
- Access to the online community of educators using Databricks in the classroom
- A curated list of resources for educators getting started with Databricks, including slides and workshops on topics like Delta Lake and Apache Spark™
- Sample notebooks to jump-start the learning experience for Delta Lake, MLflow and more
- For select classes needing high-scale computing resources, the ability to request free Databricks credits and cloud credits for courses powered by AWS and Azure (limited availability)

Interested in teaching Databricks? If you're an educator or faculty member at a university, you are invited to join the Databricks University Alliance.

"Having access to industry-leading tools and programs provided by Databricks, a company that continues to drive innovation across the data science and machine learning community, is very exciting for our professors, students and university."
— Kyle Hamilton, professor and coordinator of the Machine Learning at Scale course at UC Berkeley

Are you a student or aspiring data scientist? You don't need to wait to start learning Databricks. Check out the resources available to you right now:
- Hands-on workshops: a self-paced online workshop series for anyone and everyone interested in learning about data analysis. No previous programming experience required.
- Databricks Community Edition: sign up for a free Databricks account to follow along with tutorials and experiment with data.
- Databricks Academy: if you're a student with a university-provided email address, you can access self-paced courses on Databricks Academy free.

Our cloud partners
|
https://www.databricks.com/kr/solutions
|
Databricks Solution Accelerators - Databricks Use Cases
Databricks for Industries: delivering best-in-class data analytics and AI solutions for every industry. Get started | Book a demo

Explore the Lakehouse for your industry:
- Communications, Media & Entertainment: solutions to win more attention and inspire audiences. See how.
- Financial Services: solutions that build trust and peace of mind. See how.
- Healthcare and Life Sciences: solutions to discover and deliver better care. See how.
- Retail and Consumer Goods: solutions that guide the customer journey and promote your brand. See how.

Industry solutions: go from idea to proof of concept in as little as two weeks. Databricks Solution Accelerators deliver fast results with fully functional notebooks and purpose-built guides that include best practices. By cutting discovery, design, development and testing time, Databricks customers have gone from idea to proof of concept (PoC) in as little as two weeks. Explore the accelerators.

Ready to get started? See what Databricks has to offer with a free trial.
채용 확인하기WorldwideEnglish (United States)Deutsch (Germany)Français (France)Italiano (Italy)日本語 (Japan)한국어 (South Korea)Português (Brazil)Databricks Inc.
|
https://www.databricks.com/blog/2021/05/10/improving-customer-experience-with-transaction-enrichment.html
|
Improving Customer Experience With Transaction Enrichment - The Databricks Blog

by Milos Colic, May 10, 2021, in Engineering Blog

The retail banking landscape has dramatically changed over the past five years with the accessibility of open banking applications, the mainstream adoption of neobanks and the recent entry of tech giants into the financial services industry. According to a recent Forbes article, millennials now represent 75% of the global workforce, and 71% claim they'd "rather go to the dentist than take advice from their banks." The competition has shifted from a 9-to-5, brick-and-mortar branch network to winning over digitally savvy consumers who increasingly expect simplicity, efficiency and transparency.
Newer generations are no longer interested in hearing generic financial advice from a branch manager; they want to be back in control of their finances, with personalized insights delivered in real time through the comfort of their mobile banking applications. To remain competitive, banks have to offer an engaging mobile banking experience that goes beyond traditional banking via personalized insights, recommendations, financial goal setting and reporting capabilities, all powered by advanced analytics such as geospatial analysis or natural language processing (NLP).

These capabilities can be especially profound given the sheer amount of data banks have at their fingertips. According to 2020 research from the Nilson Report, roughly 1 billion card transactions occur every day around the world (100 million in the US alone). That is 1 billion data points that can be exploited every day to benefit end consumers, rewarding them for their loyalty (and for the use of their data) with more personalized insights. On the flip side, that is 1 billion data points that must be acquired, curated, processed, categorized and contextualized, requiring an analytic environment that supports both data and AI and facilitates collaboration between engineers, scientists and business analysts. SQL does not improve customer experience. AI does.

In this new solution accelerator (publicly accessible notebooks are listed at the end of this blog), we demonstrate how the lakehouse architecture enables banks, open banking aggregators and payment processors to address a core challenge of retail banking: merchant classification. Through the use of notebooks and industry best practices, we empower our customers to enrich transactions with contextual information (brand, category) that can be leveraged for downstream use cases such as customer segmentation or fraud prevention.

Understanding card transactions

The dynamics of a card transaction are complex.
Each action involves a point-of-sale terminal, a merchant, a payment processor gateway, an acquiring bank, a card processor network, an issuing bank and a consumer account. With so many entities involved in the authorization and settlement of a card transaction, the contextual information carried forward from a merchant to a retail bank is complicated, sometimes misleading and often counterintuitive for end consumers, and it requires advanced analytics techniques to extract clear brand and merchant information.

For starters, every merchant must agree on a merchant category code (MCC), a four-digit number used to classify a business by the types of goods or services it provides (see list). The MCC by itself is usually not enough to understand the real nature of a business (e.g., large retailers selling many different kinds of goods), as it is often too broad or too specific.

Merchant Category Codes (Source: https://instabill.com/merchant-category-code-mcc-basics/)

In addition to a complex taxonomy, the MCC sometimes differs from one point-of-sale terminal to another, even for the same merchant. Relying on the MCC alone is not sufficient to drive a superior customer experience; it must be combined with additional context, such as the transaction narrative and merchant description, to fully understand the brand, location and nature of the goods purchased. But here is the conundrum: the transaction narrative and merchant description are free-form text filled in by a merchant without common guidelines or industry standards, hence requiring a data science approach to this data inconsistency problem. In this solution accelerator, we demonstrate how text classification techniques such as fasttext can help organizations recover the brand hidden in any transaction narrative given a reference data set of merchants.
How close is the transaction description "STARBUCKS LONDON 1233-242-43 2021" to the company "Starbucks"?

An important aspect to understand is how much data we have at our disposal to learn text patterns from. When it comes to transactional data, it is very common to come across a large disparity in available data across merchants. This is perfectly normal and is driven by the shopping patterns of the customer base. For example, we can expect easier access to Amazon transactions than to corner-shop transactions, simply because of the frequency of transactions happening at these respective merchants. Naturally, transaction data follows a power-law distribution (as represented below) in which a large portion of the data comes from a few merchants.

Our approach to fuzzy string matching

The challenge with approaching this problem via fuzzy string matching is simple: larger parts of the description and merchant strings do not match. Any string-type distance would be very high and, in effect, any similarity very low. What if we changed our angle? Is there a better way to model this problem? We believe the problem outlined above is better modeled by document (free-text) classification rather than string similarity. In this solution accelerator, we demonstrate how fasttext helps us efficiently solve the description-to-merchant translation and unlock advanced analytics use cases.

A popular approach in recent times is to represent text data as numerical vectors, from which two prominent concepts emerge: word2vec and doc2vec (see blog). Fasttext comes with its own built-in logic that converts text into vector representations based on two approaches, cbow and skipgrams (see documentation); depending on the nature of your data, one representation will perform better than the other.
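The fuzzy-matching dead end described above is easy to demonstrate. In the sketch below (plain Python, illustrative only and not part of the accelerator), a classic Levenshtein edit distance rates the Starbucks description and the brand name as barely similar, even though the brand is literally contained in the description:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    # Normalized, case-insensitive similarity in [0, 1].
    a, b = a.lower(), b.lower()
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

desc = "STARBUCKS LONDON 1233-242-43 2021"
print(round(similarity(desc, "Starbucks"), 2))  # prints 0.27
```

Roughly three quarters of the characters would have to be deleted to turn the description into the brand name, so any similarity threshold strict enough to be useful rejects this pair. This is precisely why the accelerator reframes the task as text classification instead of string matching.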
Our focus is not on dissecting the internals of the logic used for vectorization of text, but rather on the practical usage of the model to solve text classification problems when we are faced with thousands of classes (merchants) that text can be classified into.

Generalizing the approach to card transactions

To maximize the benefits of the model, data sanitization and stratification are key! Machine learning (ML) simply scales and performs better with cleaner data. With that in mind, we ensure our data is stratified with respect to merchants: we want to provide a similar amount of data per merchant for the model to learn from. This avoids the model biasing toward certain merchants just because of the frequency at which shoppers spend with them. For this purpose, we are using the following line of code:
# Stratified sample: `sample_rates` maps each merchant label to its sampling fraction
result = data.sampleBy(self.target_column, sample_rates)
Stratification is ensured by Spark's sampleBy method, which takes a column over whose values stratification will occur, as well as a dictionary mapping each stratum label to its sample fraction. In our solution, we have ensured that any merchant with more than 100 rows of available labeled data is kept in the training corpus. We have also over-represented the zero class (unrecognized merchant) at a 10:1 ratio, due to the higher in-text perplexity in the space of transactions that our model cannot learn from. We keep the zero class as a valid classification option to avoid inflating false positives. Another equally valid approach is to calibrate each class with a threshold probability below which we no longer trust the model-produced label and default to the "Unknown Merchant" label. This is a more involved process; therefore, we opted for the simpler approach. You should only introduce complexity in ML and AI if it brings obvious value.

From the cleaning perspective, we want to ensure our model does not waste effort learning from insignificant data. One such example is dates and amounts that may be included in the transaction narrative: we cannot extract merchant-level information from the date a transaction happened. Add to this that merchants do not follow a common standard for representing dates, and we can conclude that dates can safely be removed from the descriptions, helping the model learn more efficiently. For this purpose, we have based our cleaning strategy on the information presented in the Kaggle blog. As a data cleaning reference, we present the full logical diagram of how we have cleaned and standardized our data.
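The sample_rates dictionary passed to sampleBy has to be derived from the label counts. Below is a minimal sketch of one way to build it (plain Python; the helper name is hypothetical, while the 100-row cap and the 10:1 zero-class over-representation mirror the choices described in the text):

```python
from collections import Counter

def build_sample_rates(labels, cap=100, zero_label="unknown", zero_ratio=10):
    """Hypothetical helper: derive the per-label sampling fractions fed to
    Spark's sampleBy. Each merchant is capped at roughly `cap` rows, while the
    zero class ("unknown merchant") is allowed `zero_ratio` times as many rows
    so that it stays over-represented in the training corpus."""
    counts = Counter(labels)
    return {
        label: min(1.0, (cap * zero_ratio if label == zero_label else cap) / n)
        for label, n in counts.items()
    }

labels = ["amazon"] * 1000 + ["corner_shop"] * 80 + ["unknown"] * 5000
rates = build_sample_rates(labels)
print(rates)  # {'amazon': 0.1, 'corner_shop': 1.0, 'unknown': 0.2}
```

Merchants with fewer rows than the cap keep a fraction of 1.0 (all of their rows survive the sample), which matches the intent of keeping every merchant with enough labeled data in the corpus while taming the head of the power-law distribution.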
Because this is a logical pipeline, the end user of this solution can easily modify and/or extend the behavior of any one of these steps to achieve a bespoke experience.

After getting the data into the right representation, we leveraged the power of MLflow, Hyperopt and Apache Spark™ to train fasttext models with different parameters. MLflow enabled us to track many different model runs and compare them. A critical piece of MLflow functionality is its rich UI, which makes it possible to compare hundreds of different ML model runs across many parameters and metrics.

For a reference on how to parameterize and optimize a fasttext model, please refer to the documentation. In our solution, we have used the train_unsupervised training method. Given the volume of merchants at our disposal (1,000+), we realized that we cannot properly compare the models based on a single metric value: generating a confusion matrix with 1,000+ classes does not bring the desired simplicity of interpretation. We opted instead for an accuracy-per-percentile approach, comparing our models on median accuracy, worst 25th percentile and worst 5th percentile. This gave us an understanding of how each model's performance is distributed across our merchant space.

As part of our solution, we have integrated the fasttext model with MLflow; we can load the model via the MLflow APIs and apply the best model at scale via prepackaged Spark UDFs, as in the code below:
import mlflow

# Load the model logged under the given MLflow run
logged_model = f'runs:/{run_id}/model'
loaded_model = mlflow.pyfunc.load_model(logged_model)

# Wrap the model as a Spark UDF so it can score rows in parallel at scale
loaded_model_udf = mlflow.pyfunc.spark_udf(
    spark, model_uri=logged_model, result_type="string"
)

spark_results = (
    validation_data
    .withColumn('predictions', loaded_model_udf("clean_description"))
)
This level of simplicity in applying a solution is critical. Once the model has been trained and calibrated, historical transactional data can be rescored with a few lines of code. These few lines unlock customer data analytics like never before: analysts can finally focus on delivering complex advanced analytics use cases, in both streaming and batch, such as customer lifetime value, pricing, customer segmentation, customer retention and many more.

Performance, performance, performance!

The reason behind all this effort is simple: obtain a system that can automate the task of transaction enrichment. And for a solution to be trusted in automated running mode, performance has to be high for every merchant. We trained several hundred different configurations and compared these models with a focus on low-performing merchants. Our worst 5th-percentile accuracy was around 93%; our median accuracy was 99%. These results give us the confidence to propose automated merchant categorization with minimal human supervision.

These results are great, but a question comes to mind: have we overfitted? Overfitting is only a problem when we expect a lot of generalization from our model, that is, when our training data represents only a very small sample of reality and newly arriving data differs wildly from the training data. In our case, we have very short documents, and the grammar of each merchant is reasonably simple. On the other hand, fasttext generates ngrams and skipgrams, and in transaction descriptions this approach can extract all the useful knowledge. These two considerations combined indicate that even if we overfit these vectors, which by nature exclude some tokens from the knowledge representation, we will generalize nevertheless. Simply put, the model is robust enough against overfitting given the context of our application.
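The accuracy-per-percentile summary used to compare models can be sketched in a few lines of plain Python (the per-merchant numbers below are hypothetical; in the accelerator these figures come from the held-out evaluation set scored by each trained model):

```python
import statistics

def accuracy_percentiles(per_merchant_accuracy):
    # Summarize a model by the distribution of its per-merchant accuracy:
    # the median plus the worst 25th and worst 5th percentiles.
    accs = sorted(per_merchant_accuracy.values())
    q = statistics.quantiles(accs, n=100, method="inclusive")
    return {
        "median": statistics.median(accs),
        "worst_p25": q[24],  # 25th percentile
        "worst_p5": q[4],    # 5th percentile
    }

# Toy example: 100 merchants with accuracies spread from 0.00 to 0.99.
report = accuracy_percentiles({f"m{i}": i / 100 for i in range(100)})
```

Comparing models on these three numbers, rather than a single global accuracy, surfaces configurations that look strong on average but fail badly on a handful of merchants, which is exactly the failure mode an automated enrichment system cannot afford.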
It is worth mentioning that all the metrics produced for model evaluation are computed over a set of 400,000 transactions, and this dataset is disjoint from the training data.

Is this useful if we don't have a labeled dataset?

This is a difficult question to answer with a yes or no. However, as part of our experimentation, we have formulated a point of view: with our framework in place, the answer is yes. We performed several ML model training campaigns with different amounts of labeled rows per merchant, leveraging MLflow, Hyperopt and Spark to train models with different parameters over different data sizes, and to cross-reference and compare them over a common set of metrics.

This approach enabled us to answer the question: what is the smallest number of labeled rows per merchant needed to train the proposed model and score my historical transactional data? The answer: as low as 50. Yes, five-zero!

With only 50 records per merchant, we maintained 99% median accuracy, and the worst 5th percentile decreased by only a few percentage points, to 85%. By comparison, with 100 records per merchant, the worst 5th percentile came in at 91% accuracy. This only indicates that certain brands have a more perplexing description syntax and may need a bit more data. The bottom line is that the system is operational at great median performance and reasonable performance in edge cases with as few as 50 rows per merchant. This makes the entry barrier to merchant classification very low.

Transaction enrichment to drive superior engagement

While retail banking is in the midst of a transformation based on heightened consumer expectations around personalization and user experience, banks and financial institutions can learn a significant amount from other industries that have moved from wholesale to retail in their consumer engagement strategies.
In the media industry, companies like Netflix, Amazon and Google have set the table for both new entrants and legacy players around having a frictionless, personalized experience across all channels at all times. The industry has fully moved from "content is king" to experiences that are specialized based on user preference and granular segment information. Building a personalized experience where a consumer gets value builds trust and ensures that you remain a platform of choice in a market where consumers have an endless choice of vendors.

Learning from the vanguards of the media industry, retail banking companies that focus on the banking experience rather than transactional data would not only attract the hearts and minds of a younger generation but would create a mobile banking experience people like and want to come back to. In this model centered on the individual customer, any new card transaction would generate additional data points that can be further exploited to benefit the end consumer, drive more personalization, more customer engagement, more transactions, etc. -- all while reducing churn and dissatisfaction.
Although the merchant classification technique discussed here does not address the full picture of personalized finance, we believe that the technical capabilities outlined in this blog are paramount to achieving that goal. A simple UI providing customers with contextual information (like the one in the picture above), rather than a plain "SQL dump" on a mobile device, would be the catalyst for that transformation.

In a future solution accelerator, we plan to take advantage of this capability to drive further personalization and actionable insights, such as customer segmentation, spending goals, and behavioral spending patterns (detecting life events), learning more from our end consumers as they become more and more engaged and ensuring the value added from these new insights benefits them.
In this accelerator, we demonstrated the need for retail banks to dramatically shift their approach to transaction data, from an OLTP pattern on a data warehouse to an OLAP approach on a data lake, and the need for a lakehouse architecture to apply ML at industry scale. We also addressed the very important consideration of the entry barrier to implementing this solution with respect to training data volumes. With our approach, the entry barrier has never been lower (50 transactions per merchant).

Try the notebooks below on Databricks to accelerate your digital banking strategy today, and contact us to learn more about how we assist customers with similar use cases.

GET THE NOTEBOOK
|
https://www.databricks.com/p/webinar/the-best-data-warehouse-is-a-lakehouse
|
The best data warehouse is a lakehouse | Databricks

On Demand. Talks. Demos. Success stories. Q&A. Available on demand.

Many enterprises today are running a hybrid architecture — data warehouses for business analytics and data lakes for machine learning. But with the advent of the data lakehouse, you can now unify both on one platform. Join us to learn why the best data warehouse is a lakehouse. You'll see how Databricks SQL and Unity Catalog provide data warehousing capabilities, fine-grained governance and first-class support for SQL — delivering the best of data lakes and data warehouses.

Hear from Databricks Chief Technologist Matei Zaharia and our team of experts on how to:
Ingest, store and govern business-critical data at scale to build a curated data lake for data warehousing, SQL and BI
Use automated and real-time lineage to monitor end-to-end data flow
Reduce costs and get started in seconds with on-demand, elastic SQL serverless compute
Help every analyst and analytics engineer to ingest, transform and query using their favorite tools, like Fivetran, dbt, Tableau and Power BI

This presentation will come to life with demos, success stories and best practices learned from the field, while interactive Q&As will help you get all your questions answered from the experts.

Speakers:
Matei Zaharia, Co-founder and Chief Technologist, Databricks
Shant Hovsepian, Principal Software Engineer, Databricks
Miranda Luna, Staff Product Manager, Databricks
Franco Patano, Lead Product Specialist, Databricks

Transcript

– [Shant Hovsepian] Hello, everyone. Thank you so much for taking time out of your busy days to join us today. We're going to talk about the best data warehouse is a lakehouse. We'll be joined by Haley Creech, who's a product director for Enterprise Data and Analytics Platforms at GSK. Me, I'm Shant Hovsepian, software engineer at Databricks.
We'll be joined by a few of my colleagues from Databricks, Miranda Luna, senior staff product manager, Franco Patano, lead product specialist, and Matei Zaharia, CTO and one of the co-founders of Databricks.So for the agenda today, we're going to do an introduction to the Lakehouse, Unity Catalog, and Databricks SQL. I have an amazing demo, show you how to get started, and tips and tricks. Talk a little bit about how GSK is leveraging Databricks SQL and the Lakehouse. And we'll talk through some of the lessons that we've learned from our users out there.A few housekeeping items. We will share the webinar recording after the event. Don't worry if you need to step away for something and come back. You'll have the recording handy later. You can use the Q&A box either down here or here to ask questions while I'm talking. We have a bunch of subject matter experts on standby, waiting to answer your questions. And at the end, we'll also take some time to answer any questions anyone may have.So let's get started by talking a little bit about how data and AI is disrupting entire industries. If you look at the companies here, tech companies, Amazon, Google, Apple, the FAANGS, as we call them, they're all using data and AI in strategic ways in how they run their businesses.Netflix, for example, started as a video rental mail order system, and went into streaming. They used data and AI to make recommendations of what other shows or movies people are interested in. They even took it one step further and they're a movie studio today, and they use the data and insights from which parts of films and shows their customers like to help produce better movies. So if it weren't for data and AI, Netflix probably wouldn't exist today. We'd all still be getting videos from Blockbuster.And it's not just the tech companies. There are traditional larger enterprises, like AT&T, that are leveraging data and AI. 
For example, AT&T got a lot of their customer data that used to live in a data warehouse, and put it together with real-time streaming events from transactions and phone subscribers to build over a hundred machine learning models that were used to help 182 million subscribers stay safe from fraud throughout the usage of the platform.So how is it that these companies are able to leverage data and AI? Well, we use what we call the data maturity curve here to kind of understand. This is if you look at the x-axis, it shows you a company's point in their data maturity. And the y-axis, the value, the competitive advantage they're getting from their data.Most companies start all the way down here from the left, where they just get some data into the system, they clean it, they build some basic reports, ad hoc queries. Then as you move further to the right, you get into more predictive modeling or prescriptive and automated decision-making systems within the platform. So the left side kind of asks questions about the past, and the right, you sort of use a crystal ball to try to predict the future.But most companies today are still stuck on the left of that curve. They struggle to really find success at scale. And they all want to go to the right. They all want to use predictions in AI to make their businesses more efficient.Their main reason for that is most people when they start from the left, easiest way to get started, traditional way is get a data warehouse, pull your data in it, put a BI tool on top, and ask them basic questions. However, when you need to do AI and machine learning, you need a data lake. You need to get your various different datasets. Some of them may be images, video, unstructured content. And put them on a platform that can scale and let you ask different types of questions around different types of processing on it. That always runs on a data lake. 
These are two separate systems, and there's a giant gap between them.Really, it manifests itself in a weird way. So for example, here, we have the data lake on the right-hand side, and this is where data typically lands first in just about every organization. Raw data, raw files come in a data lake, maybe some basic processing happens, and then they have to get summarized and copied over into the data warehouse. Once they're in the data warehouse, there are modeled and structured tables, governance rules, and your traditional tools are on top.These two systems, though, painful to manage, they're very separated, and it really just... It creates a disjointed, duplicative set of data silos. And what's worse is on top of the copies of the data, you end up having incompatible, different security models where data in the data lake might be encrypted, but once it goes into the data warehouse and someone's built a table out of it, you may not have kept that encryption in there. You may have changed the format. Someone may have access to it when they shouldn't have access to it. Vice versa. Always gets complicated when you have to maintain multiple different security models.And then of course, there are the use cases. You can't just get your high performance, high concurrency BI workloads and stick it on a data lake and expect it to perform just as well. You sort of need to change something. You need to do things differently. Just like how you can't do data streaming in a data warehouse because the data lake's where that data lands, and just even by the time you get it into the data warehouse, you've already lost a lot of that opportunity to get real-time insights.But does it have to be two disparate platforms? Let's say we've got the best of the data lake. We've got the open, reliable data storage that can efficiently handle data of various types. On top of it, layered a new way to do governance, an approach that works across all your data assets and all your clouds. 
And integrated engine is built for today to be able to do both machine learning and BI workloads efficiently and scale as needed.This is what we call the lakehouse paradigm. And at Databricks, we use Delta Lake as the core secret sauce technology as that first layer on the system, where you can bring reliability and consistency to what was once an unreliable set of files living in cloud storage. Then we have Unity Catalog, which is the central place to do all of your governance, whether they're tables, files, blobs, machine learning models. And Databricks SQL is that engine that's built for the modern processing needs, high performance, and scalability.So this is the Databricks Lakehouse Platform. When we talk to our customers and ask them why they like it so much, one of the first things they say is that it's simple. See, when there's only one platform they need to worry about, it's one set of technology and tools they need to train everyone on, one place to put all of your use cases, it really simplifies the whole system.And multicloud. About 80% of our customers use more than one cloud today. Having one consistent interface across the various clouds, not having to learn the idiosyncrasies of the various different cloud platforms is a tremendous benefit.The last is open. All of the technology here in the lakehouse and Databricks are built on open source and open standards. This is hugely valuable to a lot of our customers who traditionally when using data platforms that come from proprietary locked in systems controlled by single vendors.And when moving to the lakehouse, everyone's always a little apprehensive. Do I have to retool everything? Do I have to change everything? That's where the openness helps even more, because the entire lakehouse architecture is built on open standards. 
So all of the tools that you're used to for your data governance, your data pipelines, even data ingestion, the whole modern data stack actually already works in the lakehouse paradigm because it's so easy for these ecosystem partners to integrate using the open standards and open APIs. So you get all of the tools and functionality, even your traditional BI tools, machine learning, and user-facing data science platforms working on the system and free from integration hassles.So it's not just Databricks. The rest of the industry's really taken notice about the lakehouse. You can see the various other cloud vendors have talked about their new lakehouses. Google announced a lakehouse-like offering. AWS, even Oracle. And your traditional data warehouse companies are talking about lakehouse.But how do these lakehouse offerings really stand up? Well, we wanted to test that, and so we created what's called the lakehouse benchmark. This is essentially a modification of TPC-DS, traditional data warehouse and benchmark, but we ran it on data that's living in an open format Parquet. So this is data that exists in the cloud in an open format that all of these other vendors said they now support because they're a lakehouse.When you ran this workload across these different systems, you can see here Databricks SQL came in at just $8 to run the entire three terabyte workload. Some of the other vendors, well, those in some cases, up to 30 times more expensive to run on those systems.So while they say that they're a lakehouse and they support these open formats, were they really optimized? Do they expect their users to use it? It seems unclear if there's such a huge gap.And you might say, "Well, it's not fair. These systems, while they claim to be lakehouses, weren't designed to be lakehouses from the start. So you're benchmarking them on something that they were never really built to do." 
That's true, so we also went one step further and took the traditional TPC-DS 10 terabyte benchmark. And this time, we ran it, but we didn't put the data in an open format. We loaded it directly into the proprietary data warehouse format that those vendors use, rather than keeping it in an open storage format with open access. And even in that case, while the difference may not be 30X, you can see that Databricks came in at about $76, and this includes the time to load the data, do any optimizations, and run the queries. Some of the enterprise offerings, or even the standard offerings from other vendors, still came in at over 3X the cost to run the same workload.

What's more, even if the data is going into the proprietary database format, it probably is sitting in a data lake today. So the fact that you don't even have to load it with Databricks is a huge potential savings as well, where in these other systems, you're always going to have to be loading that data in.

But let's talk a little bit beyond the benchmarks, right? Performance is really important in the cloud because it's usage-based, it's a consumption model. So the faster something runs, at the end of the day, you save money. So that performance and efficiency does deliver a lot of business impact.

For example, there's a global media company we all know about. When they moved to the lakehouse, they were able to unify their subscriber data and their streaming data and build better machine learning models for personalization and recommendations. And what's even nicer is they had a $30 million reduction in costs. And on top of that, almost $40 million in increased revenue from accelerated offerings, right?
So this helped on both the bottom line of the business and top-line growth.

Another Fortune 50 retailer, one of the largest in the world, was able to use their supply chain data with the streaming IoT sensors from within all of their stores to detect when food was being spoiled, when there was too much surplus, and optimize their whole supply chain end to end. This switch to the lakehouse and integrating the real-time data gave them 10X faster time to insight and saved around $100 million annually through reduced food waste.

And one of my favorite examples is Atlassian. It's a product I use every day as a developer. They completely migrated all of their internal data systems over to the lakehouse and stopped using data warehouses. For them, it was really about democratizing access for their entire enterprise and lowering the operational costs so that every single Atlassian employee can use the data to make informed and better decisions. This reduced their analytics infrastructure costs by 60% and led to a 30% better delivery time on their analysis.

So at Databricks, our mission is to democratize data and AI, and our destination is the lakehouse.

Now I want to introduce Matei Zaharia to talk about how to do fine-grained access control, as well as data sharing, in the lakehouse. Thank you, everyone.

– [Matei Zaharia] Thanks, Shant. I'm excited to talk about data governance and sharing on the lakehouse and what we're doing for those.

First of all, let's start with governance. As anyone who's tried to work on this knows, governance for data and AI workloads today is very complex. It's mostly because you've got different kinds of systems that you have to manage. You've got data lakes, you might have a metadata system like Apache Hive, and you probably also have your data warehouse and different systems for machine learning.
And then you have to figure out how to give all your users the right permissions in each system and how to audit all of it.

The problem is that each system has different ways of managing governance. For example, on your data lake, you can set permissions at the level of files and directories, but that has some limitations. You can't set row- and column-level permissions, for instance.

In contrast, once you start defining a table, in systems like Hive, you can set permissions on tables and views. But just turning that on over a data lake doesn't guarantee that all the permissions on the underlying files match. So the permissions on the metadata can be out of sync with the data.

And then when you put data in the data warehouse, that has its own permission model involving tables, columns, rows, and so on, which is great, but it's just different from what you have elsewhere in your data systems. And of course, for machine learning, all the different systems in the machine learning space have their own way of doing governance.

So at Databricks, to make governance significantly simpler in the lakehouse, we've designed Unity Catalog, a single catalog and a single governance layer over all the data and AI assets in the lakehouse. The design is very simple. All the access to your storage system, whether it's data lake files, tables, machine learning models, and so on, is policed by Unity Catalog. And there's a single permission model you can set up once that determines who has access to what.

And to make it very easy to configure, the permissions are all set using SQL GRANT statements. So anyone who knows how to administer a data warehouse or just a SQL database can go and set permissions and actually administer the whole lakehouse, including files and machine learning.

And everything is also centrally audited across these workloads. We've instrumented all of them so you can see who is doing what.
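As a rough illustration of the single-permission-model idea, here's a minimal Python sketch: one set of GRANT-style verbs covering tables and file paths alike. The `Grants` class, the principals, and the object names are all hypothetical, and this is not how Unity Catalog is implemented internally.

```python
# Toy sketch of a single GRANT-style permission model covering
# tables and files alike (illustrative only; not Unity Catalog
# internals).

class Grants:
    def __init__(self):
        # maps (principal, securable) -> set of privileges
        self._acl = {}

    def grant(self, privilege, securable, principal):
        self._acl.setdefault((principal, securable), set()).add(privilege)

    def is_allowed(self, principal, privilege, securable):
        return privilege in self._acl.get((principal, securable), set())

acl = Grants()
# The same verbs work for a table or a file location:
acl.grant("SELECT", "main.sales.orders", "analyst@acme.com")
acl.grant("READ FILES", "s3://bucket/raw/", "etl@acme.com")

print(acl.is_allowed("analyst@acme.com", "SELECT", "main.sales.orders"))  # True
print(acl.is_allowed("analyst@acme.com", "SELECT", "main.hr.salaries"))   # False
```

The point of the sketch is only that one grant vocabulary, checked in one place, replaces the per-system models described above.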
And you also get fine-grained lineage across all of them to see which items are derived from other ones. That's a very powerful feature for understanding how your data is being used.

So how do you use Unity Catalog? The simplest part of it is just setting up the access controls. You can do that using SQL. So for example, you can set permissions on tables using a GRANT statement for SELECT, but you can also set permissions on files, which is something unique to Unity Catalog. And you can also see all the activity that's happening in the system, as a system table across all these things, and then see how people are using the data.

Beyond that, you've got a powerful search interface where people can search across all the datasets in the whole company, discover them, and set up comments and additional metadata about each one.

And finally, you've got a very powerful lineage feature that tracks column-level lineage through all the workloads that run on the Databricks Lakehouse Platform. So you can see, for example, for every table, what tables upstream it's derived from. But the lineage even extends to notebooks and dashboards. So you can see, for example, which dashboards are using this table, or which scheduled jobs, which data science notebooks, and so on.

And because we are doing all this in the Databricks Runtime engine, we can understand the lineage at the level of individual fields. Not just datasets, but you can see which columns are derived from which ones, which is very useful both for managing the data in your organization and for tracking any possible security problems if someone creates a derived dataset based on a sensitive column.

So Unity Catalog is also designed to integrate with the best-of-breed products in many areas of data governance. This includes products to set advanced policies, like Immuta or Privacera, the best data ingestion products, BI and dashboarding tools, and tools for data pipelines.
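To make the upstream-lineage idea concrete, here's a small Python sketch that walks a lineage graph to find every transitive upstream source of a table. The graph and the table names are made up for illustration; in the platform this graph is captured automatically by the engine, down to the column level.

```python
# Toy lineage graph: table -> tables it is directly derived from.
# (Illustrative names only.)
LINEAGE = {
    "gold.revenue_by_region": ["silver.orders", "silver.regions"],
    "silver.orders": ["bronze.orders_raw"],
    "silver.regions": [],
    "bronze.orders_raw": [],
}

def upstream(table, graph):
    """Return the set of all transitive upstream tables."""
    seen, stack = set(), list(graph.get(table, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(graph.get(t, []))
    return seen

print(sorted(upstream("gold.revenue_by_region", LINEAGE)))
# ['bronze.orders_raw', 'silver.orders', 'silver.regions']
```

The same traversal run in the downstream direction is what answers "which dashboards and notebooks use this table."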
So you can use it with everything in the modern data stack together.

And we're very excited to announce that just earlier this summer, Unity Catalog became GA. So now anyone with a Databricks account can try it out. And we've been seeing a lot of customers greatly simplify the way they manage their data with this.

So Unity Catalog is one piece of the equation in making the lakehouse very easy to govern, but the other piece is collaborating across organizations, or across different groups in the same organization. For this purpose, we've also built some very powerful features for data sharing.

Data sharing, if you haven't looked into it, is often considered one of the most important aspects of a modern data platform. Many organizations are saying this is becoming increasingly important. For example, Gartner is saying that it may improve the economic performance of organizations by 3X, and they expect data ecosystems to grow significantly.

Now, traditionally, it's been possible to do data sharing in some data warehouse platforms, but only within the context of a single platform. So for example, if you have lots of users on, say, BigQuery, these users can share data with each other, but only with other instances of BigQuery. And if someone else has lots of users on, say, Amazon Redshift, they can share data with each other across instances of Redshift, but not with other platforms.

And for many organizations, this is very limiting, because you do end up with different systems deployed in the organization or with partner organizations, and they aren't able to share data with each other. So this leads to vendor lock-in.
And it also means that in practice, if you need to share a dataset broadly, you probably need to replicate it into many systems and many different cloud regions and get it into all of them, which is just expensive to do.

So at Databricks, we took a very different approach, sticking to the lakehouse philosophy of openness, and we designed an open protocol for data sharing called Delta Sharing, which means that it can work across computing platforms. You don't have to be sharing with someone on Databricks. Anyone who can run open source systems like Spark, pandas, Power BI, or just applications in Java can actually consume the data.

The way it works is very straightforward. The data provider can simply set up tables to be shared using SQL commands, but then the data consumer can connect to them from many different platforms, not just from Databricks, and can actually read the data in there. And it's all based on the open source Delta Lake file format. So basically, any tool that includes that library can start receiving data. So it enables cross-platform sharing, and you can just do it on your existing tables.

Delta Sharing is rapidly being embraced throughout the industry. We're already seeing petabytes of data shared per day, even though the product only started preview less than a year ago.

Just as an example, Nasdaq has been using it to streamline delivery of some very large datasets they have in the cloud that they just couldn't share through other means. It was just too expensive to replicate them into other platforms or share them through FTP or things like that.

And Shell has also been using it to share datasets with other companies it collaborates with to streamline production and make it more efficient. Again, massive datasets that now any platform that can run Spark or one of these other tools can immediately access.

Delta Sharing has also been evolving rapidly. There are a whole bunch of new connectors that launched this year.
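On the consumer side, the open source Python client identifies a shared table with a URL of the form `<profile-file>#<share>.<schema>.<table>`. Here's a small sketch that parses that format. The profile path and table names are hypothetical examples, and the `delta_sharing.load_as_pandas` call shown in the comment is from the open source client as I understand it; it needs the package installed and a real share to actually run.

```python
# Split a Delta Sharing table URL of the form
#   <profile-file>#<share>.<schema>.<table>
# into its components. The names below are made-up examples.

def parse_table_url(url):
    profile, _, table_path = url.partition("#")
    share, schema, table = table_path.split(".")
    return {"profile": profile, "share": share,
            "schema": schema, "table": table}

parts = parse_table_url("config.share#acme_share.sales.orders")
print(parts)
# {'profile': 'config.share', 'share': 'acme_share',
#  'schema': 'sales', 'table': 'orders'}

# With the open source client installed and a real profile file,
# reading the shared table into pandas looks roughly like:
#   import delta_sharing
#   df = delta_sharing.load_as_pandas(
#       "config.share#acme_share.sales.orders")
```

The profile file itself is just a small JSON credential (endpoint plus bearer token), which is what makes the protocol consumable from any platform.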
There's support for tables that change over time and for sharing a change data feed, where you can see just which rows changed. And also within Databricks, it's very easy to set up sharing with someone else on Databricks, say, just through a couple of clicks.

So Delta Sharing is also a growing piece of the lakehouse. We're working on some powerful features that we're also incorporating in the open source project, including sharing views that can filter down the data, and also sharing streams, so that a streaming client can consume the changes in real time and react to data that's being shared.

Delta Sharing is also GA as of this summer, so we invite you to check it out. And together with Delta Sharing and with Unity Catalog, you end up having very powerful control of governance, both within an organization and across organizations that collaborate.

So that's it for the governance and sharing features. Now back to Shant to talk about some of the powerful warehousing and BI features in the lakehouse.

– [Shant Hovsepian] Thanks, Matei. Hi, everyone. It's me, Shant, again. I'm going to spend some time now talking to you about how we can do data warehousing and analytics on the lakehouse.

Of course, all of that's possible with Databricks SQL. It's our SQL warehouse offering on top of the lakehouse. You can think of it as the house in lakehouse. I'm going to spend some time talking about how you can connect your data stack up, how we combine the best of data lakes and data warehouses, and of course, how we obsess over bringing the most value possible through performance.

So let's talk about connecting your data stack. When we set out to build Databricks SQL, we wanted a first-class SQL experience. So all of your existing data warehouse tools that are built around the whole SQL ecosystem can just work out of the box in the lakehouse.
So whether it's ingestion from business-critical applications, transformation of that data using standard transformation toolkits, or just consumption and querying of that data in your business applications, we've got you covered with Databricks SQL.

First, let's talk a little bit about data ingestion: basically, the ability to work with your data no matter where it is. We've done tons of work with Fivetran to make sure that their traditional data warehouse integrations work just as well on the lakehouse, making it easy to ingest business-critical data from Marketo, Salesforce, Google Analytics. Whatever your source may be, it works seamlessly out of the box.

When it comes to data transformation, we've done similar integrations with dbt: an open source connector that you can use with Databricks to run all of your dbt pipelines and transformations, and collaboratively explore and transform your dbt projects on the lakehouse.

And then data consumption, of course. What good is the data that you just loaded into the system if you can't give it to your users to get insights and build new business value from it? So we've worked with a ton of the existing BI vendors out there to make sure that all of their connectors are lakehouse certified, so you can get the best out-of-the-box experience possible.

But of course, BI tools aren't the whole story. When it comes to business applications, sometimes you need to write code and integrate the data right where it's going to be consumed, which is why we're excited to now support the ability to run SQL from anywhere. Build custom data applications directly using a REST API to run SQL queries, or use one of our new open source connectors from your favorite programming language. Everything from Go, Node.js, and of course Python, basic command-line connectivity, and the traditional Java JDBC stack are all available to connect with directly today.

So how are we combining some of the best of data warehousing and data lakes together?
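To show what "run SQL from anywhere over a REST API" might look like in practice, here's a sketch in plain Python that builds such a request. The endpoint path and payload fields follow the Databricks SQL statement execution API as I recall it, and the workspace host, token, and warehouse ID are placeholders; treat all of those as assumptions and verify against the current API documentation.

```python
# Sketch of submitting a SQL statement to a warehouse over REST.
# Endpoint and field names are my assumption based on the statement
# execution API; host/token/warehouse values below are placeholders.
import json
import urllib.request

def build_statement_request(host, token, warehouse_id, statement):
    payload = {
        "warehouse_id": warehouse_id,   # which SQL warehouse runs the query
        "statement": statement,         # the SQL text itself
        "wait_timeout": "30s",          # how long to block for a result
    }
    return urllib.request.Request(
        url=f"https://{host}/api/2.0/sql/statements/",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical workspace values; urllib.request.urlopen(req)
# would actually send the request.
req = build_statement_request(
    "example.cloud.databricks.com", "dapi-XXXX",
    "warehouse-123", "SELECT 1")
print(req.full_url)
```

For most applications the open source connectors (Go, Node.js, Python, JDBC) wrap this kind of call for you.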
Well, we've come out with Python user-defined functions. Databricks has always made Python a first-class citizen when it comes to your data science, AI, and engineering workloads. But now you have the power and flexibility of Python when you need it, calling it directly from your SQL. It's available in preview today. You can check out the link here, and use it for your machine learning models and your data transformation pipelines, directly from your favorite SQL tools.

Query federation. Now, the lakehouse is home to all your data sources, no matter where they are. And with query federation functionality, you can directly connect to multiple different data sources no matter where they live. You can connect to remote databases without having to worry about ingesting and moving data. You can combine multiple data sources, or even treat some data sources as a reference dataset or a dimension table while your full dataset is in the data lake. And all of this is seamless: our query optimization engine automatically pushes down the parts of the query that are relevant to each of the different data sources.

Materialized views, of course. Materialized views are essential for speeding up queries when you want to use pre-computed results. And while materialized views have been around in data warehouses for a while, what's new and interesting is that we've taken some of the best technology Databricks has for streaming workloads and built our materialized views on top of that. So you don't have to worry about picking refresh intervals or measuring the trade-off between data latency and correctness. With our materialized views, you can choose: you can have the data in your view be always up to date as needed, or dial that back and simplify your ETL and data transformation pipelines.

We've added new support for data modeling with constraints. This is very familiar from traditional data warehouse techniques.
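As a sketch of what a Python UDF body might look like, here's a hypothetical `normalize_phone` function. The `CREATE FUNCTION ... LANGUAGE PYTHON` registration shown in the comment follows the documented Databricks SQL syntax as I recall it; the function itself is an invented example, so verify the registration syntax against the current docs.

```python
# The body of a hypothetical Python UDF that strips formatting
# from phone numbers. In Databricks SQL the same body would be
# registered roughly like (syntax as I recall it from the docs):
#
#   CREATE FUNCTION normalize_phone(s STRING) RETURNS STRING
#   LANGUAGE PYTHON
#   AS $$
#     return "".join(ch for ch in s if ch.isdigit())
#   $$;

def normalize_phone(s):
    # keep only the digits: "(415) 555-0100" -> "4155550100"
    return "".join(ch for ch in s if ch.isdigit())

print(normalize_phone("(415) 555-0100"))  # 4155550100
```

Once registered, the function is callable from any SQL tool like any built-in function, which is the point of the feature.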
You can have your primary and foreign key constraints defined to understand and visualize the relationships between your tables; support for IDENTITY columns, which automatically generate unique integer values for all the new rows added to your tables; and most importantly, enforced CHECK constraints, so you never have to worry about data correctness or quality issues. As the data is loaded, we'll check those constraints and give an error if it doesn't meet the criteria you've defined in your constraint. This is very unique; no other cloud data warehouse provides this functionality today.

We recently added geospatial support, so now you can supercharge your geospatial processing. We're using a unique engine based on H3, so you have very efficient storage of your spatial data, whether it's large or small. You get high-performance spatial joins. And by having a direct SQL interface to the geospatial engine, you can combine your AI, your advanced GPU data visualization, and your simple SQL analytics together in the same platform, without having to switch between tools anymore.

We also added information schema support. So now, all of your metadata is simply a query away. And what's even more interesting, with Unity Catalog, you can have an information schema table that defines relationships between your catalogs, your schemas, your columns, even your unstructured data sources, like locations for files in cloud storage, as well as machine learning models.

So, let's talk a little bit about some of the world-class performance features we've incorporated in Databricks SQL to make it a competitive and performant engine on the lakehouse.

So, Photon is our next-generation query execution engine, which we built from the ground up. You can see in this chart over here, with every release of the Databricks Runtime, we're constantly getting better and better performance.
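To illustrate what IDENTITY columns and enforced CHECK constraints do conceptually, here's a toy Python sketch; it's my own illustration of the semantics, not Databricks code.

```python
# Toy illustration of two warehouse-style table features:
# an IDENTITY column that auto-assigns unique integers, and an
# enforced CHECK constraint that rejects bad rows at load time.
import itertools

class Table:
    def __init__(self, check):
        self._ids = itertools.count(1)  # IDENTITY: 1, 2, 3, ...
        self._check = check             # CHECK constraint predicate
        self.rows = []

    def insert(self, row):
        if not self._check(row):
            raise ValueError(f"CHECK constraint violated: {row}")
        self.rows.append({"id": next(self._ids), **row})

orders = Table(check=lambda r: r["amount"] >= 0)
orders.insert({"amount": 42})
orders.insert({"amount": 7})
print(orders.rows)
# [{'id': 1, 'amount': 42}, {'id': 2, 'amount': 7}]

try:
    orders.insert({"amount": -1})   # fails the CHECK constraint
except ValueError as e:
    print(e)
```

Enforcement at load time is the key property: bad rows fail loudly instead of silently landing in the table.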
But eventually, things start to flatten out. We sort of hit a plateau. And it was at that point that we knew we needed a whole new paradigm shift to get to the next level of performance. And that's Photon. Here it is: a huge performance improvement from the release of this technology.

So, what is Photon? It's essentially an MPP execution model, written in native code, leveraging vectorized CPU primitives. That's a lot of technical language for saying it was built from scratch to be really fast. So far, it's processed exabytes of data and billions of queries. And it's not just about raw speed: our customers are getting faster interactive workloads, they've seen their ETL jobs run at one fifth the cost, and they're seeing on average about a 30% savings in TCO, sometimes as high as 80% on some of their workloads.

And if you're interested in learning more about Photon, we published a paper recently at the SIGMOD conference. We were very humbled and fortunate to win the Best Industry Paper Award, and it really details all the inner workings of how Photon works. Feel free to check it out and get more information.

And so, when we talk about performance, and we talk about Photon and all of the things we've done in the system, we really wanted to go back and build things from the ground up, so we can have amazing performance on both the data warehousing workloads and a lot of the AI/ML workloads that you would normally run on the data lake.

And so over here on the left-hand side, we have the 100 terabyte TPC-DS price/performance benchmark. This was something where Databricks set the official data warehousing performance record. You can read more about it over here. This was a fully audited benchmark, where Databricks ran a large, intensive, 100 terabyte benchmark designed for data warehousing workloads. And we essentially looked at the price/performance: how much did it cost to run this benchmark?
And with Databricks SQL, it was done at about one twelfth the cost of what some of the top competitors were able to do. So a lot of work has gone into making these stressful, difficult data warehouse workloads fast.

But I also want to draw your attention to this chart over here on the right side, where it's a much smaller dataset. It's 10 gigabytes, not 100 terabytes. Traditionally, you wouldn't think of data lakes as a platform that made sense for small datasets, but when it comes to data warehousing and BI, in many cases you have a mix of small and large datasets, and we wanted to make sure our performance was awesome regardless of the dataset size. So here, you can see that when we started running this concurrent query benchmark, where we run a bunch of queries concurrently and measure how many of them can complete in an hour on a small dataset, it wasn't too great: around 4,000 queries an hour. But since then, we've seen a 3X improvement, now not just meeting the other data warehouse competitors' results, but exceeding them in many cases.

And one of the ways that we've been able to get such great performance for concurrency is having an elastic infrastructure that can scale with query and workload demand. And that's possible with Databricks Serverless. So, Databricks SQL runs best with Serverless. It gives improved agility at a lower cost for our users. It's instant, elastic compute. You get fast query execution, and you can scale nearly instantly. For the admins, it's zero management. You don't have to worry about pools and reserving instances and dealing with a lot of the cloud compute complexities that can happen when there are demand and supply issues. And overall, for the IT budget, it means lower TCO. You don't have to over-provision anymore, and you can automatically shut everything down when things are idle.

But of course, we had to spend a lot of time and effort making sure that we're optimizing away that idle time.
Because when you have a serverless environment, if there's nothing running, no queries, no user interacting with it, we want to shut those machines down so you don't get charged for them. So we've done a lot of work to make sure that once they're shut down, the next query that comes in doesn't have to pay any warmup costs to read data from cloud storage or to recompute certain results. So a lot of the concurrency benefits and the elasticity of scaling with Serverless were made possible because as soon as you need more resources, as soon as you get a lot of concurrency, we can spin them up and they can execute queries immediately.

When we first started, that first query that you would bring in on a new instance would still take about 40 to 50 seconds because of warmup and caching effects. Today with Serverless SQL, that first query can get up and running within 10 seconds of a new instance starting up. And pretty soon, we're working on a new persistent caching feature where you can have new compute resources added to your warehouse immediately, within three seconds, to handle the new workloads.

And so with that, I want to conclude that the best data warehouse is a lakehouse. I'm going to stop talking about the why and the benchmark results and all of these things, and let Miranda show you exactly why we think the best data warehouse is a lakehouse. Thank you.

– [Miranda] Hi, everybody. My name's Miranda, and I'm a product manager on the Databricks SQL team. As you've heard today, the best way to warehouse is in fact a lakehouse, because it marries the speed of the data warehouse with the scale and flexibility of the data lake. So today, I'm going to walk you through DB SQL top to bottom. We'll start by creating a serverless SQL warehouse and then explore some of our data, its lineage, and its governance. We'll then analyze that data in DB SQL's built-in SQL editor and take a look at new features like query history, materialized views, and Python UDFs.
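The warmup problem described above is essentially a caching problem: freshly started compute shouldn't recompute results a previous instance already produced. Here's a toy Python sketch of a result cache that survives instance restarts by living in "remote storage" (a dict standing in for durable cloud storage). This is my own illustration of the general idea, not how Serverless SQL is implemented.

```python
# Toy illustration of a persistent query-result cache: results live
# in "remote storage" (a dict standing in for cloud storage), so a
# freshly started compute instance can serve repeated queries
# without recomputing them.

REMOTE_STORE = {}  # stands in for durable cloud storage

class Warehouse:
    """One compute instance; may be shut down and replaced at any time."""

    def run(self, query):
        if query in REMOTE_STORE:
            return REMOTE_STORE[query], "cache hit (no warmup cost)"
        result = f"result-of({query})"   # pretend this is expensive
        REMOTE_STORE[query] = result
        return result, "computed"

w1 = Warehouse()
print(w1.run("SELECT count(*) FROM sales"))   # computed the first time

w2 = Warehouse()  # a brand-new instance after w1 was shut down
print(w2.run("SELECT count(*) FROM sales"))   # served from the cache
```

Because the cache is decoupled from any single instance, shutting idle machines down no longer forces the next instance to start cold.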
We'll wrap by switching over to our favorite BI tools and interacting with our lakehouse data live from those environments, all powered by serverless warehouses. We've got a lot of fun ahead of us, so let's get started.

I'm going to start by creating a serverless warehouse. Serverless warehouses come online in seconds and are optimized for high-concurrency BI workloads. That's exactly what we're doing today, so I'm going to create one. We'll go ahead and make a new one called Acme. And as soon as I get that confirmation, I'm going to go ahead and switch over to the data tab. Now, why don't we pick Acme from here? Let's see what's going on. As I... Oh, already started. Great. And as you'll see here, I have a number of different catalogs, each with different schemas and tables within. This is all governed by Unity Catalog in a central manner, and today we're going to show exactly how, by creating a new one ourselves. So I'm going to go ahead and start by creating a new catalog. Let's call it, oh, acme_avo, since we've got some avocado data. We'll go ahead and hit create.

And you'll notice a couple things have happened here. First, a default schema has been created; there's no data there. Why don't we go ahead and start by giving Shant and Matei permissions on this? Let's see, Shant. And I forget if I have a second user on here. Okay, great. Let's just give it all on this particular schema. You'll also notice that I have an information schema that contains a number of different views on all of the changes that have happened in my catalog. So for example, if I go to schema privileges, I'll be able to see those changes I just made to grant permission on the default schema to Matei and Shant. And while we're at it, why don't we just make sure that they have at least usage on the entire catalog? Just thinking ahead, I want to make sure that they've got everything they need and they're not going to need me to intervene anymore. So let's go ahead and give them usage.

Great.
Now again, if I go and look at my catalog privileges information schema view, I should be able to see that that grant executed successfully. All the information I need to really audit changes in permissions and data is going to be captured right here. Now, it's not just easy to see the changes in permissions; it's actually easy to bring data into Databricks as well. So certainly, one option is I can always ingest from a partner. Another option would be to use a technology like Delta Live Tables. But today, let's do something simple. Let's create a table from a local CSV that I have here on my machine, that has avocado prices. I'm going to go ahead and select Acme and make sure we're in the default schema. Great.

Excellent. And now you can see that the table UI gives me a few helpers. One, I can see the type that's auto-assigned to each of the columns coming in from the CSV. Everything looks good to me, so I'm going to go ahead and hit create, and we will have a new table in our default schema. As soon as that's done creating, it drops me right back into that new table in the data explorer. Now I'm going to do a couple things so we can take a look at lineage.

Great, so this opens it in a new query and I can go ahead and hit run. What that's going to do is give me a preview of that table if I want one. I'm going to just save this and call it avocado test. I'm going to let that continue, and then I'm going to pop back to the data explorer.

And the other action I can take is I can create a quick dashboard. So I can pick a couple different fields of interest, and I'll go ahead and hit create there. By default, that's going to create a new dashboard for me that shows me a couple interesting stats about my data. But the reason I wanted to do all that is so I could show you a little bit more about the lineage of this exact table.
So, one, if I come down to dashboards, you'll see the downstream lineage: the quick dashboard, which I can go ahead and open directly from in here. That's the same dashboard we were just on. The other thing that I can do is come and click on this table insights button to understand a little bit more about who the frequent users are. No surprise, there I am; we just created this table. As well as some of the frequent queries, and I can open any one of those in the SQL editor with just the click of a button.

Now that we've got a good handle on the compute, data, and governance side of the lakehouse, why don't we actually spend a little time here in this SQL editor and learn a bit more about what we can do in terms of analyzing our data? If I want to use a third-party SQL workbench, I can do that via an ODBC or JDBC connection. But today, let's take a closer look at the experience right within Databricks, side by side with your data. One thing that I really like about the editor here is that it's super trivial for me to just pick a few columns of interest and go ahead and expand those. If I want to format the query, I can do that with the click of a button. Also, I definitely realize that not everybody loves buttons, so we have a number of different keyboard shortcuts available. It's super easy to review the entire list of them.

And then I can go ahead, just like in most other SQL editors, and add a visualization if I'm interested in that. And when I have a few columns, we actually suggest one out of the box for you. I can toggle through a few different views if I'm interested.

Now, all these executions that we've been running today, I can review in past executions. So anything that's in the current query or in all queries across all the work I've been doing, which is a really nice way to go back and take a look at anything. If I did something silly like put in a number of extra commas, it's going to give me a little bit of a warning there.
But then if I try to run it, I'm going to get the sort of syntax error highlighting that I'm used to. I'm going to see that this is the issue: row two is the row with the syntax error. And I can even see it's this first duplicate comma that's the issue. So, if I didn't want to just modify this, I could actually go ahead and click here and open up a past execution. If I wanted to resume editing this, I would go back to my original one. But it's pretty trivial to go back in time and see which versions were working.

Now, I'm going to go ahead and open up another query that I have, and this one is going to be for national revenue trends. So let's go ahead and pull that up. This is on TPC-H data. And let's go ahead and add another country, just so we know that this is not going to be coming from cache or anything like that. I'm going to go ahead and remove the limit 1000 I had when I was working earlier, and let's just run that. Now, you'll notice as this starts to execute, I can pull up a little bit more information about what's actually happening with my query. And then as it completes, I'm going to be able to access a much more detailed breakdown.

Now, I pulled this up while the query was executing, but as it completes, I can always get it back that way. And I can go ahead and pull up the query profile, and very quickly, I can see where time is being spent, whether I prefer the graph view or the tree view. And I can understand exactly what the different elements of my query performance are. Obviously, this is a pretty quick query; we're not too concerned with what we see here.
But if I did have a longer running one or somebody else in my organization was running into some trouble, I could zoom in and really understand exactly what was going on, and help them correct any sort of inefficiency that they might have in their query. Now, this is the profile for one single query, but certainly as an administrator, it's helpful to have an understanding of what's going on across your entire workspace. So we also offer this query history area, where at a glance, I can see everything that's happening. And I'm an admin on this workspace, but I can see everything that's happening across all the different warehouses. Or I could drill down into just one. I can look at myself or all users. I've been the most active recently, so I would expect to... Yep, there I am. See myself. But really helpfully, the other thing that we can do is also filter by status. So if I want to see all the queries that failed, I can really easily drill down to those. You can imagine too, that if I wanted to understand any sort of queue or queries that haven't finished running, I could go ahead and pull up all the queries that are running and sort by the duration that they've been open. That's going to help me, as an administrator, really identify anywhere where I might want to engage a particular user and make sure that they're not blocked. Now let's go back to the SQL Editor, because we're going to dig into a couple of new really exciting features, Python UDFs and materialized views. So let's get started. Starting with Python UDFs, I'm going to pull up a quick example that I've already created and let's walk through that together. Now, UDFs, or user-defined functions, are ways to extend vanilla Spark with custom business logic. And to date, we've supported SQL UDFs, right? So I'm going to run through a quick example of that, and then we'll get to the fun stuff in Python.
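Before moving on, the query history workflow described a moment ago (filtering by status, and sorting still-running queries by how long they have been open) amounts to a couple of simple list operations. Here is a minimal Python sketch; the `QueryRecord` shape and the sample data are invented for illustration and are not the actual Databricks query history schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical in-memory stand-in for the query history shown in the demo;
# the real query history lives behind the Databricks UI and REST API.
@dataclass
class QueryRecord:
    user: str
    status: str      # "FINISHED", "FAILED", or "RUNNING"
    started: datetime

def failed_queries(history):
    """All queries that failed, mirroring the status filter in the UI."""
    return [q for q in history if q.status == "FAILED"]

def longest_running(history, now):
    """Running queries sorted by how long they have been open, longest first."""
    running = [q for q in history if q.status == "RUNNING"]
    return sorted(running, key=lambda q: now - q.started, reverse=True)

now = datetime(2023, 1, 1, 12, 0)
history = [
    QueryRecord("amy",   "FINISHED", now - timedelta(minutes=5)),
    QueryRecord("bob",   "FAILED",   now - timedelta(minutes=3)),
    QueryRecord("carol", "RUNNING",  now - timedelta(minutes=40)),
    QueryRecord("dan",   "RUNNING",  now - timedelta(minutes=2)),
]

print([q.user for q in failed_queries(history)])        # ['bob']
print([q.user for q in longest_running(history, now)])  # ['carol', 'dan']
```

An administrator would use exactly this kind of "running, sorted by open duration" view to spot a blocked user.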
But the idea here is again, in the lakehouse, we want you to be able to use whatever makes the most sense for your use case. Whether that is super simple logic to express in SQL, being able to reuse that, or whether that's something a little bit more complex that requires Python or that might already exist, where it would just be great to use it instead of rewriting it in a SQL format. We want to give you the tools that you need to do whatever you need to do. So let's go ahead and start by just taking a look at this table. We're going to start with a SQL UDF example. So you can see here, I just have country, email and name fields. For this SQL UDF, let's do a simple example first, let's just do a little bit of masking. Let's insert some ellipses. And you could imagine, this might be a source table in the bronze layer that I have access to as a data admin, but I want to do a little bit of transformation at the silver layer before I expose that to the broader organization. I certainly want analysts in my organization to be able to understand, you know, the unique number of users by email across each country, but they probably don't need the specific email themselves in order to do that. So the first thing I'm going to do is I'm going to go ahead and run this, create this function. So all this is going to do is take a look at where the @ in the email is, and then insert some ellipses on either side. This is going to be a SQL UDF, so you can see that our language here is SQL. And let's just take a look and... Oh, let's make sure that it's doing exactly what we expect it to. So I'm going to just take a look at the country, and then I'm going to mask the email field from emails demo.
You can envision this being a super easy function to call anytime you have any sort of data in the bronze layer where you want to apply some simple masking. So we can see here, yep, I've been able to mask; it'd be pretty good to get an idea of distinct users per country with this sort of masking. Again, it's not going to be as unique as an ID field, but it's helpful to at least be able to do an eyeball check. Now, this is where the fun starts. This is where the new and exciting pieces come in. So here, what we're going to do is we're actually going to create a Python UDF. And all we're going to do is, if you remember that name field, we're going to go ahead and just say hi. Right? Very, very simple. We certainly know that much more common use cases are redacting PII from nested JSON fields or calling forecasts. But today, again, we want to keep everything very, very simple here in the SQL editor. So let's go ahead and do that. We run the selection, and we've now created that in the email catalog default schema, and here's where I'm going to bring it all together. So you remember that mask email SQL UDF and the greet Python UDF. We're going to go ahead and we're going to call both of those UDFs in the exact same SQL query. So we're going to stick to the country, then we're going to mask the email, then we're going to add a little greeting in front of the name. So let's go ahead and run that. And just like that, we are able to get country, a masked email address again via the SQL UDF, and then add a little greeting to the name field via the Python UDF. That's the power of the lakehouse coming together. Now, let's talk about materialized views, another really critical feature in a data warehouse. To take a look at materialized views, let's switch over into another environment where I've loaded up an example. Materialized views reduce costs and improve query latency by pre-computing slow queries and frequently used computations.
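For readers following along, here is a plain-Python sketch of the logic behind the two UDFs in that walkthrough. In Databricks SQL these would be `CREATE FUNCTION` statements (one `LANGUAGE SQL`, one `LANGUAGE PYTHON`); the function names and the exact masking style here are assumptions based on the demo narration, not the presenter's actual code.

```python
# Assumed logic of the demo's SQL UDF: keep the '@' and mask the
# characters on either side with ellipses.
def mask_email(email: str) -> str:
    local, _, domain = email.partition("@")
    return f"{local[:1]}...@...{domain[-3:]}"

# The trivial Python UDF from the demo: just say hi to the name field.
def greet(name: str) -> str:
    return f"Hi, {name}!"

# Calling both "UDFs" over the same rows, like the combined SQL query
# (SELECT country, mask_email(email), greet(name) FROM ...).
rows = [("US", "ada@example.com", "Ada"), ("FR", "max@example.fr", "Max")]
for country, email, name in rows:
    print(country, mask_email(email), greet(name))
```

The point of the demo is that both functions, one defined in SQL and one in Python, are callable side by side from a single SQL query.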
They also enable easy to use transformations by cleaning, enriching and [inaudible 00:51:12] normalization of base tables. Overall, MVs reduce costs while providing a simplified end user experience, because they're incrementally computing changes from the base tables. And today, we're going to look at a very simple example. So first, if you recall, even though we're in a new environment, it's the same data we're used to. So the acme_avo catalog, the default schema and the avocado table, we're going to go ahead and run that. Great. And we'll see here, that was a confirmation that it was successfully executed. And we're going to do a quick check, but all I'm doing here is essentially picking a subset of my columns from that original avocado table. We're going to do the quick validation that we are actually just seeing the date, total bags and region. And then what we're going to do is we're going to take this as our base table for the materialized view, and we're just going to set the table properties to enable the change data feed that we need. We're going to run that. Perfect, successfully executed. And now, this is where we're actually going to create the materialized view. So you see here, I'm going to create or replace that materialized view. We're going to call it avocado_mv. We're leaving avocado_mv_base as a separate table, that's what we're going to build off of. And we're just going to go ahead and group by regions. We're going to end up with bags sold by region. And let's just double check. We'll first create the materialized view, then we'll spot check one of our regions. Awesome, so we got the confirmation, but let's just take a look and see, for example, what the value is for Boston, all right? Boston's a big Celtics town, we're looking at avocados, avocados are green. Celtics going far in the playoffs this year, I'm sure. So let's see what comes back for that particular region. Awesome.
So it looks like we are at about 21.49 million avocados, love to see it. Now, again, if you remember our data, it was over a few years. And what we're going to do now is, we have a materialized view that is computing the total bags sold per region. We took a look at what it was for Boston, but now we're going to actually show the power of the materialized view itself, and whether it's incrementally updating based on just the inserted new values. So here, all I'm going to do is insert one more row of data. And this is, again, into the base table. We're going to insert one more row, we're going to say that there was a crazy sale in September of 2022, there were an extra 5 million bags sold. So we should see this go up by about 5 million. So let's go ahead and first insert this. And then the next thing we're going to do is refresh the view. Now that view's only going to compute on this one inserted value. Cool. One inserted row, one affected row, fantastic. And now if we refresh this materialized view and we take a look at that same materialized view where the region is Boston, we should see that number go up by 5 million. Excellent. So we've refreshed our view. And now, let's take a look. Moment of truth. Do we have 5 million more bags in Boston? Yes, we do. We were at 21.495 and now we're at 26.495. Just like that, our materialized view automatically refreshed and took into account that additional row of data inserted into the base table, and we were able to quickly see the result recomputed. Now, we've done a lot of analysis and exploration of our data in the lakehouse, right from within the Databricks SQL UI. But of course, we know that it's just as important to work within our favorite third party BI tools. And we also understand that's where a lot of analysts are spending most of their day.
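The refresh behavior just demonstrated can be modeled with a toy incremental aggregate: the view holds pre-computed totals, and a refresh folds in only the rows inserted since the last refresh rather than rescanning the base table. This is an illustration of the idea, not how Databricks implements materialized views; the numbers mirror the demo.

```python
# Base table rows, named after the demo's avocado data (values illustrative).
base_table = [
    {"date": "2021-06-01", "region": "Boston", "total_bags": 21_495_000},
]
mv = {}             # region -> total bags (the "materialized" result)
last_refreshed = 0  # how many base rows the view has already folded in

def refresh_mv():
    """Fold only the newly inserted rows into the pre-computed totals."""
    global last_refreshed
    for row in base_table[last_refreshed:]:   # just the new rows
        mv[row["region"]] = mv.get(row["region"], 0) + row["total_bags"]
    last_refreshed = len(base_table)

refresh_mv()
print(mv["Boston"])   # 21495000

# Insert one crazy September 2022 sale of 5 million bags, then refresh:
base_table.append({"date": "2022-09-01", "region": "Boston", "total_bags": 5_000_000})
refresh_mv()
print(mv["Boston"])   # 26495000
```

As in the demo, the second refresh only touches the single inserted row, yet the Boston total goes from 21.495 to 26.495 million.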
So, why don't we go ahead and switch over to Tableau? Now that I'm over here in Tableau, I've established a connection using my personal access token and some details on that serverless warehouse. You can see here that I'm able to view that exact same lakehouse data we've been interacting with to date. So I can take a look at the acme_avo catalog, the default database, and if I want to take a look at what tables are going to come back, I can go ahead and drag avocado over here, one we're super familiar with. And immediately, I'll be able to take a look and confirm the fields are what I expect, as well as get a quick preview of some of the data. You'll notice that this is a live connection, but of course, extract is also always an option. You know, what I'm now going to be able to do is go ahead and ask and answer a question of our data, like we would in Tableau any day. I might want to know the total number of bags sold each year, broken out by region. So with a few clicks and drags, I can do that. And I can see, of course, the US total is going to be very large, but I'm also seeing quite a bit of sales in California, Great Lakes, and Mid-South. If I wanted to understand if that held true overall, it's really trivial to remove that date metric from the columns field. And you can see here that yes, California, Great Lakes, Los Angeles and Mid-South (Mid-South being more of a region) all stay very popular in terms of total number of bags sold overall, and also if I bring the year back in. Well, thank you all for joining me today. I really appreciate you giving me the opportunity to show you a little bit of what's new in Databricks SQL, and why the best data warehouse is in fact a lakehouse. Now, I'm going to actually hand the reins over to one of our customers so that we can hear a real life story.
This is going to be how GSK uses Databricks SQL and Unity Catalog to scale data analysis through a harmonized mesh ecosystem. Take it away. – [Haley] Thanks so much. I'm so happy to be with all of you here today. At GSK, we treat and prevent disease with our vaccines, specialty medicines and general medicines. We focus on the science of the immune system, human genetics and advanced technologies in four core therapeutic areas, to have major impact on health at scale. We're ambitious for our patients, we're accountable for impact, and we do the right thing, because 2.5 billion people around the world count on us to get it right. So it's important to note that the advanced analytics journey at GSK did not start with the enterprise data and analytics platform. There are some amazing data experts all around the company. We have folks in vaccines developing end-to-end machine learning algorithms in the vaccine space. In commercial, they're developing insights for the commercial line of business. In the supply chain, they're optimizing the manufacturing chains and the supply chains to decrease cost. And each one of these groups, if you dive into them, is even more siloed. It's great that they're creating insights, but as I mentioned, they're silos. There's opportunity for data redundancy. And most importantly, from my perspective, there's this amazing story to tell about a treatment going from the research and development phase, through the supply chain, actually getting manufactured, all the way to commercial and being sold to customers. And with these different groups operating in silos, that story gets lost. Additionally, there are inconsistent governance patterns. Chargeback is different between groups. They may not be using the right tools for the problem. And all of this together inspired the enterprise data and analytics platform at GSK, which is called Code Orange. We have a fit-for-purpose ingestion tool catalog.
We have an enterprise data lake that is a harmonized mesh architecture, which we'll be getting into shortly, that has a complex and audit-ready security model. We have core and common data tools that are easy for different customers to use for things like data modeling, data cataloging, and data preparation. And we also have analytics suites for advanced and traditional data and analytics, with things like, in the future, data science workbenches and analytics workbenches as well. So how did we do this? We built Code Orange, as I mentioned, the brand name for our data platform, on a harmonized mesh ecosystem, where we see the respective business units as different nodes. I really scaled down this image so I wasn't overloading you with each component on the node, but it is much more robust than this, and I'm totally willing to discuss this with you guys after the session if you have some more questions. So, you guys can see we leverage Databricks on the mesh core, which is where our common services are, and then on each node as well, as well as our ADLS Gen2 storage, which provides a lake on each business unit's or business function's node. Note that we have connections, both networking and virtualization, between each of the nodes to enable that data sharing, which I'm going to get into a bit more as we get to the end of this session. During this talk though, because of the scope of this session, I really want to focus on how we're leveraging Databricks SQL. Additionally, it's important to note that we're also using Databricks as a key component in our Fit for Purpose Ingestion catalog, and as part of our data analytics workbench in all of those tools. Focusing on Databricks SQL, I really want to focus on three key problems that it solved for us as we released it to the supply chain business unit last quarter. The first one is the ease of use.
I know this may seem silly to some of you, especially in startups or smaller organizations, but in a 90,000-person company, it is so difficult to download software. Some people don't have the right permissions to download any query-type tools that they need from the internet, and they have to go through a complex governance process just to be able to hit download. But, since this is browser-based, it really increases time to insight by making it accessible to anyone who already has the right access. Additionally, our super users who are using Databricks as part of that Fit for Purpose catalog or as part of our analytics suite have everything they need in one place, so they can easily go back and forth between their Python notebooks and then query whatever they just transformed and make sure it looks accurate using these query capabilities, which is really easy to use for those power users. Next, I want to talk about cost. This is not Databricks. The screenshot is actually CloudHealth, which is how we do FinOps at GSK. We have saved a lot of money by using Databricks. In the first iteration of the platform, one of our major spends was actually on traditional data warehouses because, as you guys know, they're always running, always incurring cost. And one thing that Databricks SQL did for my customers is it put financial power into their hands as users of the platform. They're able to optimize when the queries run. They can make it so it's basically an on-demand system that they can actually control, rather than something that's just always incurring cost. And finally, connectivity. I know this is going to be mentioned a lot more today, but there are many, many, many different ways to connect to downstream analytics tools. Just by going to Power BI, for instance, you can click a couple buttons and you can figure out 20 different ways to connect from ADLS to a visualization tool.
But, whenever you get into complex security models, some of these ways just don't work, and in a big organization, this leads to lots of support tickets, customer complaints, and similar problems. Databricks provides that easy connection to the downstream applications for traditional reporting insights, and most importantly to me, as someone who ends up writing a lot of documentation for the customers, it's self-documenting. People can read and understand how to do it without any jumping through hoops. Before this was an option, what we used to do was use personal access tokens from the data science and engineering workspace, and we have some pretty complex documentation in place to allow them to connect to Power BI using personal access tokens, so this is a game changer, as was demoed in a supply chain demo at the end of Q2 last year. My customers are demoing this to their customers because it's so powerful. I want to take a second to look at the future. As we all heard, Unity Catalog is GA, which is so exciting to me, and let me tell you why. Looking at the architecture diagram from before, we see that each of these nodes is connected. Well, also notice that we have workspaces in each environment as well. Each of these different business functions has an analytics and ETL workspace per environment. But what happens whenever you're trying to access data in each one? Today we have two options. One, the customer, so the person in the business, has to open up, let's say, the supply chain's workspace, but then also the enterprise's workspace, and actually not see their data in a unified view. We also have another approach where we use service principals, but what that does is it triggers a secondary governance process. So, even when someone has already been granted access to the data, because this isn't user-based, it's still another process of them having to go through and jump through hoops to get access to that data.
What Unity Catalog is going to do for us is it will allow, in a single workspace, the customers to have access to all of the data they have access to, and it will actually allow that sharing experience to greatly improve across the company. Whenever I started talking to you all today, I mentioned the story of the treatment all throughout the process and how it touches every single business function to get that global view of what it means to bring a treatment to market, and with Unity Catalog, I think that we're really going to be able to accomplish this. Thank you all for the time. Next up, I would like to introduce Franco, who's going to be going through some best practices with you all. – [Franco Patano] Thanks, Haley. It's always great hearing from our customers. Hey, everyone, my name is Franco Patano and I am a Product Specialist at Databricks. I focus on Databricks SQL and I'm here to give you some tips from the field. First off, let's talk about cloud data warehouses. We often see in the field how organizations are using cloud data warehouses, and unfortunately, they're trying to use them as a modern data platform, but they're kind of stuck in this situation where they're realizing it's not a modern data platform and they're trying to look for an alternate solution. Often what we find is they're doing ELT on a cloud data warehouse because we used to do that on-prem. You had the choice back then: there were ETL tools like Informatica, Talend, and Ab Initio where you would build your ETL outside of the warehouse, because the warehouse was considered premium compute. You didn't want to waste your premium compute on ETL tasks, but you could still do it, because you bought the servers, you bought the software, you already licensed it, and you owned the network and you had your data center. Why couldn't you just do it?
You were able to, so some people did, and what we find is that on-prem, this was fine, but when you lift and shift that into the cloud, all of a sudden costs get out of control, because in the cloud you're metered on everything. Those workloads, while inefficient, did what they were supposed to do on-prem, but they end up costing a lot of money in the cloud, because they're built up of really complex multi-stage processes to get that data from a file into a table. And most data warehouses, by the way, might have things called streams, but they don't have real streaming, because they measure streaming in minutes. Real streaming is measured in seconds, and that's a big difference. And then also, you can't really load crucial data for data science into a data warehouse. Things like images or video or audio don't fit. Often they might have some support for semi-structured data with something called a variant type, where you could just shove anything in there. Often this is not very efficient, and it's very complex because it involves making numerous copies of the data before you can actually do something with it. We often find that these things are not optimized for data engineering, but the one thing that they're great at is doing BI and reporting through typical tools like Power BI, Tableau, Looker, dbt, and they have a really good system there. But the thing that it isn't good for is data science. And normally what we find with organizations that try to do this is they have to copy data back out of the warehouse, back onto a data lake, in order to use these ML or data science tools, and that is very expensive to do. And often this tooling is all disparate and disconnected from the main data stack, and this is fraught with friction. Often customers come to us and they're like, "Can't we just have one system to do everything from BI to AI?"
And that's what we think Databricks is, and I'm going to tell you, you can do modern data warehousing on Databricks, and then you can enable all the other use cases for data science and machine learning. There is a real stream processing engine in Databricks called Spark Structured Streaming that can read directly from these event buses and process streams at the speed of thought. We also have a large number of partners that have real-time CDC tools that connect to on-prem systems or cloud systems to transport or ETL that data out onto your data lake. And essentially, the foundation of the Lakehouse is Delta Lake, because Delta is really just an open protocol. It's an open-source protocol to deal with your tables on a data lake, and we'll talk a little bit more about the benefits of that. But essentially, Delta is what enables you to have a table on a data lake, and that's how you can do modern data warehousing on Lakehouse. And then once you have one construct that can be read as a table or files, you can service your BI or analytics needs with Databricks SQL using those tables, and your data science and machine learning can be leveraged using the files, because it's all the same construct on your data lake in the open-source format, Delta Lake. Now, often people come to us and they're like, "Well, how do you actually process the data?" Data warehousing had this concept of raw, staging, and presentation layers, and really it's the same concept in Delta or Lakehouse. It's bronze, silver, and gold. These are very similar terms. Essentially, bronze or raw is where you land the data that you got from the source, and you want this for lineage purposes. This could be considered write-optimized: just get the data and land it there. If you're getting data from a vendor, this is where it lands on a cloud object store.
If you're getting data from streams, this is where you would land it from your streaming source onto object store. And then, just like what was made popular with Bill Inmon in data warehousing, you want an integration layer, a layer where all of your data can be integrated together. But the data needs to be cleaned up, and there needs to be data enrichment; you need to take care of bad dates or nulls, or maybe common business logic. And you organize this data by key domains; this is the silver, the staging layer. You can employ normal forms here. We'll talk about data modeling a little bit, but this could also be considered like an ODS or an operational data store. And then you need to actually deliver solutions to your business, and this is where you need the gold layer, the presentation layer. This is where solutions are built, and here you can build your data models. We recommend star schema, but this is your read-optimized layer. This is where your BI tools and your analytics are going to connect in order to get that data and serve it up to your users. And you can build all types of things here, like sandboxes or data meshes and even feature stores or data science areas. So, let's talk about dimensional modeling. People often ask, "Can you do dimensional modeling on Lakehouse?" Absolutely. Delta Lake is a table format on a data lake; essentially you can do all those same things. But we often get asked, "What do you recommend?" Typically, all of the common benchmarks that exist today are something of a star schema, like the data warehousing benchmarks. That's what we know works really, really well, because it's what we benchmark. That doesn't mean that other modeling techniques don't work or aren't performant; this is just the one that we know is. And some best practices are: don't over-normalize. Use conformed dimensions. Definitely use surrogate keys for your joins and your optimizations.
You can do slowly changing dimensions for time tracking. Delta Live Tables has a great API called Apply Changes Into, which makes this really simple, and then you can use materialized views to scale out your analytics and BI so that you have great performance at runtime. What are some common things, or some steps to success, for dimensional modeling on Databricks or Lakehouse? Essentially, if you don't have another tool of choice for ETL, you should leverage Delta Live Tables on Databricks and Databricks SQL together. You get efficient ETL and ELT with query serving. We'll talk a little bit about how we benchmark this in a couple slides, but essentially you have three things to remember: Optimize, Z-order, and Analyze. Delta Live Tables currently handles Optimize and Z-order for you; it schedules jobs for them so that the tables are kept in sync. And then Analyze collects statistics for the cost-based optimizer to pick the right plan and to do other optimizations during runtime. We're trying to make these tools as simple as possible for our users, but those are just some things you can do to stay ahead of the game. Sometimes people ask, "What about the data mesh architecture, can you do data mesh on Lakehouse?" Absolutely. What we find is that data mesh is more of a practice and less of a technology or a tool set. You can essentially use the bronze layer for the source systems where you get your sources from. In your integration layer, your silver layer, you can build your product domains. And then in your gold layer, you can build all of your derived data products, and this is where you can enable your self-service layer, so that you can have all of your data workers leveraging your derived data products to provide value to your business. And then let's talk about a Lakehouse mesh governance model.
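The medallion layers and the dimensional-modeling advice above can be combined into one miniature end-to-end sketch: a raw bronze landing, a cleaned silver layer, and a gold star schema with an integer surrogate key on the region dimension. All table names, cleaning rules, and data here are invented for illustration; on Databricks this would be Delta tables fed by a pipeline such as Delta Live Tables.

```python
# Bronze: raw, write-optimized landing - keep everything, even bad rows.
bronze = [
    {"sale_date": "2022-09-01", "region": "Boston", "bags": 100},
    {"sale_date": None,         "region": "Boston", "bags": 50},   # bad date
    {"sale_date": "2022-09-02", "region": "Albany", "bags": 70},
]

# Silver: the cleaned, enriched integration layer (drop rows with bad dates).
silver = [r for r in bronze if r["sale_date"] is not None]

# Gold: star schema. Assign integer surrogate keys to the region dimension,
# and join facts to dimensions on those keys rather than on natural values.
dim_region = {}                       # natural key -> surrogate key
for r in silver:
    dim_region.setdefault(r["region"], len(dim_region) + 1)

fact_sales = [
    {"region_sk": dim_region[r["region"]], "sale_date": r["sale_date"], "bags": r["bags"]}
    for r in silver
]

# A read-optimized rollup a BI tool would hit: bags per region surrogate key.
rollup = {}
for f in fact_sales:
    rollup[f["region_sk"]] = rollup.get(f["region_sk"], 0) + f["bags"]
print(rollup)   # {1: 100, 2: 70}
```

The surrogate keys decouple the fact table from the natural region names, which is what makes slowly changing dimensions and join optimizations practical at the gold layer.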
If you have a self-service layer, you need a way to govern it, and as we learned about Unity, this is how we can enable that on the Lakehouse. When you have enterprise data sources where maybe only a few people have modify access, but everyone can have read, you can enable access controls to do that. Or if you want to give business units or departments the ability to manage their own data, you can set different permissions on the database itself. And then if you want to enable users to bring their own data sets to enrich the data even further, you can apply different access controls to that layer as well, and this keeps everyone in their walled gardens in order to maintain proper data governance on the Lakehouse. Next, people often ask, "How can I get good workload management out of Databricks SQL? How do I know I'm getting the best cost optimizations over the day?" Databricks SQL warehouses on the backend just kind of manage this for you, and I'll explain how that works. Essentially, the first thing you have to decide is a size. If you don't know, I often recommend medium, and it works out great. This is the first level with most cloud instances where you get full network bandwidth, and you really want that in order to have fast performance. And then what you want to do is set your scaling. You can set a scaling of min 1, max 10. Don't get concerned with looking at the price tag on 10. Basically, as you see here over the course of the business day, as users are coming in in the morning, the Databricks SQL warehouse is scaling up to handle all of those users' demands, and then as they leave for lunch, it scales down to make sure you're not paying for idle compute, and then when those users come back in the afternoon, it slowly scales up to make sure it's handling their demand appropriately. Behind the scenes, Databricks SQL is looking out for your costs to make sure you're getting the best price-performance in the cloud.
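A toy version of that scaling behavior: with min 1 and max 10 clusters configured, the warehouse follows concurrent-user demand over the day but never leaves the configured band. The users-per-cluster capacity below is a made-up assumption for the sketch, not a Databricks sizing rule.

```python
# Hypothetical autoscaling policy mirroring the min/max settings described.
MIN_CLUSTERS, MAX_CLUSTERS = 1, 10
USERS_PER_CLUSTER = 10  # assumed capacity per cluster (illustrative only)

def clusters_needed(concurrent_users: int) -> int:
    """Scale with demand, clamped to the configured [min, max] band."""
    wanted = -(-concurrent_users // USERS_PER_CLUSTER)  # ceiling division
    return max(MIN_CLUSTERS, min(MAX_CLUSTERS, wanted))

# Morning ramp-up, midday peak, lunch lull, afternoon return:
for users in [5, 40, 95, 10, 60]:
    print(users, "users ->", clusters_needed(users), "clusters")
```

Because of the clamp, a quiet lunch hour drops you back to one cluster (no pay for idle compute), and a spike beyond capacity never exceeds the max you agreed to pay for.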
And that's not all, because I'm here to tell you the best warehouse is a Lakehouse, and we run a ton of benchmarks to prove it. But I want to talk to you today about two. You've probably already heard of TPC-DS. We made a big splash in the news about this last year, but to me, that's only one half of the story, because TPC-DS is the industry-standard benchmark for data warehouses, for query serving: how fast can you run these 99 queries, and what is the price-performance of that? But how does the data get into a warehouse? It has to be ETL'd or ELT'd. And that's a completely other half of the equation. So, there's another industry-standard benchmark for warehouses for ETL, which is called TPC-DI, or data integration. This benchmark is very old, and we haven't made an official submission yet, because the price-performance metric accounts for buying hardware and software and the depreciation over three years, and we just don't do that stuff in the cloud anymore. But, generally, every data researcher agrees on the two things that matter most in benchmarking: your throughput and your price-performance. We're here to tell you that we ran those benchmarks too, and what we found is that with Lakehouse on Delta Lake and Delta Live Tables, we can process 23 million rows per second. That's amazing performance. Not only that, but we are able to process a billion rows for less than a dollar. That's just unheard of in the cloud today. And this is the value that we're trying to provide to you: a great data platform that does everything like data warehousing, ETL, and query serving, and everything machine learning and AI, and you can get great price-performance on all of that. And you can see here that our weekly active users are growing very, very fast, because our users and their administrators and the people that pay the bill all agree that Lakehouse is the best warehouse. You might be thinking, this sounds great, but why do people typically choose Lakehouse?
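A quick back-of-the-envelope check on those two claims (23 million rows per second, and a billion rows for under a dollar) shows what they imply per billion rows and per row:

```python
# Arithmetic sketch using the figures quoted in the talk.
rows = 1_000_000_000
throughput = 23_000_000          # rows per second, as claimed

seconds = rows / throughput
print(f"{seconds:.1f} s per billion rows")   # ~43.5 s

cost_per_billion = 1.00          # dollars; the quoted upper bound
print(f"${cost_per_billion / rows:.2e} per row")
```

In other words, the two numbers together claim roughly three-quarters of a minute and at most a billionth of a dollar per row of ETL.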
Essentially, it breaks down to the economics of the Lakehouse architecture, because you don't have to make multiple copies of the data. It's one copy in Delta Lake. There's no need to have those copies, so your costs for data storage go down, and this is great. Most people agree we don't want redundant copies of our data sitting around, for both data security reasons and cost reasons. And then, because you have a single platform for all of your data and AI needs, you can reduce some of your redundancy in the cloud, because with migrations, we often find customers have on-prem tools and cloud tools, and there is a whole lot of overlap between them. And then, since all of your data is in one place and all of your tooling is on one platform, it simplifies your path to production, and that path has never been clearer than with the Lakehouse.

We generally take a technical approach to migrations, where we do an architecture and infrastructure assessment. Definitely reach out to Databricks; we have a field team that is excellent at this. Then we help you plan your migration, moving your data from wherever it sits to Delta Lake. Then we migrate your pipelines, whether you want to use DLT or one of our partners like Fivetran, Rivery, or others. And once we've migrated the data and the pipelines, we can point all of your analytics and BI tools to the new Lakehouse in the cloud, and now you're free to explore all those new use cases with data science and machine learning on the Lakehouse.
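The throughput and cost figures quoted in the benchmark discussion above (23 million rows per second, and a billion rows for under a dollar) can be sanity-checked with quick arithmetic. The implied hourly rate below is derived purely for illustration, not a published price:

```python
# Quoted benchmark figures: 23M rows/second, a billion rows for < $1.
rows_per_second = 23_000_000
billion_rows = 1_000_000_000

# Time to push a billion rows through at that throughput:
seconds = billion_rows / rows_per_second
print(f"{seconds:.1f} s per billion rows")          # about 43.5 s

# Working backwards from "$1 per billion rows", the implied hourly
# spend rate (purely illustrative, not a price list):
cost_per_billion = 1.00
implied_hourly_rate = cost_per_billion * 3600 / seconds
print(f"implied spend rate: ${implied_hourly_rate:.2f}/hour")
```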
If you need help accelerating your migration, whether you need tools like Arcion, Fivetran, or Qlik, or you need system integrators, there are partners like Accenture, Deloitte, and Cognizant ready to help you out across all three clouds, and we even have Databricks migration specialists to help you package up the different solutions you need.

When you meet with Databricks, we can help you with our solution accelerators, which are pre-packaged solutions that you can copy and deploy in your organization. We also have professional services and personalized workshops to help accelerate your success. And let's take a look at Databricks Academy. This is a great resource for all of the data workers across your organization to get the training they need to be effective in their roles. We have an array of Academy learning plans for all the different types of workers you have working with data in your organization, from a Lakehouse fundamentals learning plan to data analysis, or even advanced SQL for Databricks SQL. There are learning plans for everyone to learn how to deliver value to their business.

And there are different types of courses: free self-paced courses, workshops and instructor-led courses, and even certifications if you want to get certified. And here's a look at some of the tracks we have: the data analyst, the data engineer, the machine learning practitioner, the Apache Spark developer, the platform administrator, and the architect. Thank you for joining us today. Have a great day.

– [Jillian] All right, Franco, fantastic. Thank you very much to all of our presenters and all of our speakers and attendees for joining us today. It was an awesome session. We had so many questions coming through, so it's now time to kick off our live Q and A with all of our presenters. My name is Jillian [inaudible 01:23:39]. I am going to be your moderator for today.
First off, we had lots of questions about how to qualify for the voucher. Just to recap, you will need to answer the feedback survey and take the fundamentals course, which is about two hours. The links are in the resources section of this webinar platform. We will also send them to you via email after this webinar, and you will get links to the presentation and the recording as well, of course. We hope you've enjoyed it so far. Let's just start with a few basic intro questions. Just to recap, what can Databricks provide that others cannot?– [Franco Patano] Is that open for anyone to take?– [Jillian] Go ahead, Franco.– [Franco Patano] Hello, everyone. That's a great question. Most customers have a lot of choice in the cloud today: you have cloud providers offering solutions built for their own cloud, and you have some vendors offering solutions built across the clouds. And typically, when I think of this type of question, it's about the confusion of choice. You have all of these different choices thrown at you as a consumer of these cloud products, and most of the time, you just need to get your work done. You need to build something, you need to build a platform, and you need to enable all of your users. What I think Databricks provides that no one else does is essentially a unified analytics platform: one solution that can service all of the users in my organization. And that's what I think Databricks provides that no one else does.– [Jillian] Absolutely. We're a unified platform. Does that mean we provide all the components required for batch processing, training, data warehousing, and all of data science and machine learning?– [Franco Patano] Yes.
Yeah. That means, as we saw in the architecture Sean was showing earlier, all of those use cases are supported.– [Jillian] Fantastic.– [Franco Patano] You can have one platform to do your ETL, data engineering, machine learning, and business intelligence with warehouse serving, all on one copy of your data in Delta Lake.– [Jillian] Right. And that's how the Lakehouse is better than either the traditional data lake on one side or the traditional data warehouse on the other. Is Databricks available for on-prem deployment?– [Franco Patano] That is a great question. Databricks is only available on the public clouds, so AWS, Azure, and GCP. But that doesn't mean we don't work in a hybrid type of architecture. We have a lot of customers that still have pockets of things on-prem, or they're going through their cloud migration journey. And with some of the features announced today, especially Unity and Query Federation, you can essentially connect to your on-prem networks to catalog all your data, and if you want to federate, you have that option. Or if you want to ETL that data into the cloud, you have that option as well.– [Jillian] That's awesome. Databricks is more like a SaaS model, but do customers have to manage the infrastructure? Do we do it? How much work is needed?– [Franco Patano] That is another great question. Databricks offers options in this category. Traditionally, how Databricks works is that we have a SaaS product, which you can consider the UI, the notebooks, the cluster managers, and the warehouse managers, but you can actually deploy the infrastructure in your own account. It's similar to IaaS, but not really; it's really more of a SaaS product. And then we have our serverless offering, which is completely managed for you.– [Jillian] Awesome. We have some questions also about how to get started. Some of the attendees ask, do we have any advice about skills that somebody needs to get started on their journey with Databricks?
Is it mandatory to know Python or any other language specifically for Databricks? What's your advice on that?– [Franco Patano] Yeah, I have organizations asking this all the time, and it usually comes out as, "We don't know Spark, how can we use Databricks?" or "I don't know Scala, how can we use Databricks?" or something to that effect. And actually, you don't have to know Spark to use Databricks; you don't even have to know Scala. The most popular languages on our platform are SQL and Python. SQL is considered the lingua franca of data professionals all over the world; it's the most common language out there. Python is a close second, but we even offer low-code tools on Databricks now. We just announced bamboolib, which is basically kind of like an Excel macro recorder for Databricks, or essentially for Python. Even if you don't know coding, you don't have to in Databricks; you can use bamboolib and learn on your own, and you can even go to academy.databricks.com to help you on your journey.– [Jillian] Yeah. Awesome. Thanks, Franco. I see we have Matei and Sean on the line as well. I hope the audio is functioning now. I just want to go back a little bit to the origins of the Lakehouse and maybe talk a little bit about Delta as well. We had lots of questions around Delta. Can somebody just summarize how a data lake is different from Delta Lake?– [Matei Zaharia] Yeah. I can take that.– [Jillian] Yeah.– [Matei Zaharia] Oh.– [Jillian] Go for it, Matei.– [Matei Zaharia] Okay. Yeah. Traditionally, a data lake was basically just low-cost, file-based storage where you could store large amounts of data as they're being produced. You don't have to throw anything away, and then you can query them and organize them later, and then extract more valuable, curated data sets from them.
Delta Lake and the Lakehouse are trying to take that same type of infrastructure but make it also good for actually working with curated data, for data warehousing. Basically, you get the same kind of low cost, large scalability, and easy ingest that you have with a data lake, but then you can have tables that get really good performance. You can have different sorts of statistics and indexes on them, and you can also have transactions, version control, and other management features right on there, basically eliminating the need for a second system to do those advanced data management things.– [Jillian] Very cool. So is it fair to say that, essentially, we can think about a Lakehouse as your existing data lake with Delta Lake on top to help manage the data, plus all of the tools that we provide for ETL, data warehousing, streaming, DSML, et cetera, and cataloging?– [Matei Zaharia] Yeah. It's basically just additional functionality on top of the data lake. A key part of that is Delta as the data management and storage layer that supports all those management features.– [Jillian] Awesome. So we're hearing about other formats available. Could you compare Delta with other open source table formats like Iceberg and Hudi?– [Matei Zaharia] Yeah, so definitely this Lakehouse model is becoming quite popular, and there are other projects that do similar things, updating your data lake to make it easier to manage. Of the ones today, I think Delta is the one that's been used most widely. It's used at basically thousands of companies already, partly because Databricks and Azure and other vendors have built products based on it. These formats are quickly evolving, but at least today, I think in Delta you'll find a lot more emphasis on scale. It works very well with very large tables, and on performance for queries of any size, basically.
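The transactions and version control Matei mentions can be illustrated with a toy versioned table in plain Python: every write is a commit that produces a new immutable snapshot, and past versions stay readable ("time travel"). This is an illustration of the transaction-log idea only, not Delta's actual on-disk format:

```python
class ToyVersionedTable:
    """Toy stand-in for a Delta table: each commit appends a new
    immutable snapshot, so readers can query any past version.
    (Hypothetical sketch; Delta stores a log of JSON/Parquet commit
    files rather than full copies.)"""
    def __init__(self):
        self._snapshots = [[]]          # version 0: empty table

    @property
    def version(self):
        return len(self._snapshots) - 1

    def commit(self, new_rows):
        # Copy-on-write: previous snapshots are never mutated.
        self._snapshots.append(self._snapshots[-1] + list(new_rows))
        return self.version

    def read(self, version=None):
        v = self.version if version is None else version
        return list(self._snapshots[v])

t = ToyVersionedTable()
t.commit([{"id": 1}])            # version 1
t.commit([{"id": 2}])            # version 2
print(t.read())                  # latest snapshot: two rows
print(t.read(version=1))         # time travel: one row
```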
We've tried to really optimize performance because we wanted to take on the data warehousing use case head-on. Some of the other projects started more in just the data lake world and these kinds of batch workloads. But the formats are all evolving over time.– [Jillian] Yeah. Awesome. Okay, so that's for Delta. By the way, as we go through questions, I'll ask attendees to feel free to keep asking questions. We're watching the Q&A panel; they're coming very fast, but we'll do our best to see what's coming and maybe take some additional ones as we go. So, about Unity Catalog: we've heard from many customers that it truly is a game changer. Could you just recap why customers should adopt Unity Catalog for their data governance needs on the Databricks platform, and what they love about it the most?– [Matei Zaharia] Yeah, I mean, I think it just makes it a lot easier to manage your data, to manage who has access to it, to audit it, and to set up advanced governance around it. We've seen a lot of people adopt it just to simplify that process. Once you do that, the interface is very similar to managing permissions in a SQL database, so anyone who knows how to do that in a system like, say, BigQuery or Redshift or anything like that can now manage the entire Lakehouse and all the data sets they already have in there, and give people fine-grained access to pieces of it. But it's also something that we're building a lot of great new functionality into. For example, the lineage feature is one of the favorite features among early users. It just simplifies a lot of operations when working with data, just to know who's using this data set, when they last used it, and to make sure you don't break anything as you change stuff in there. And we have quite a few other great features coming on top of that as well.– [Jillian] Anything you'd like to share?– [Matei Zaharia] I'm excited about tags and attribute-based access control based on those.
Basically, instead of having to set permissions on individual resources, like individual tables, you can set a tag. You can create a tag, for example, "this is credit card data," apply that to lots of columns, and then just create a rule based on the tag that applies to all of them. Then you can just have your data stewards figure out which data to tag, or you can do it automatically using background scanning. Then everything has those tags, and all the rules apply to them.– [Jillian] Yeah, that's very cool. We had some questions about integration with Unity Catalog. I know you touched on this in your presentation, but again, just to recap, since there was so much content today: Unity Catalog works across multiple workspaces, it works with your Active Directory, for example, and it works with leading solutions like Collibra and Informatica, correct?– [Matei Zaharia] Yep. Yeah, we've been working with all the major product vendors in the data space to make it work well. So all the BI tools, all the ETL products, and also catalogs like Collibra and Alation that let you catalog everything in your enterprise.– [Jillian] Very cool. You also talked about data sharing. Could you talk about some of the key benefits that customers would realize from using Delta Sharing on Databricks?– [Matei Zaharia] Yeah. I mean, it basically makes it easy to share data in your organization with other groups, either in your company or in others. One of the key things with it is that the other groups don't need to be running the same platform as you. For example, they don't need to be on Databricks. You can share data with someone who is just doing analysis, say, on a VM doing data science in Python; they can just directly connect to it from Python. You can share it with someone who's using Power BI; they can just connect in the Power BI user interface. They don't need to set up a data warehouse or anything like that to put the data into.
You can share it with anyone that's running Spark in any form, or Databricks SQL, or other tools. When we talk to users who need to exchange data a lot, the top problem was how hard it is to share to platforms other than the one you have, because every platform tries to use sharing as a lock-in mechanism, encouraging more people onto the same platform for the sake of convenient sharing. We think in most organizations, you're never going to just replace everything you have in every corner and have a single computing engine deployed everywhere. So we'd rather work with everything that's already out there and just make it very convenient for people.– [Jillian] Right. Plus, now based on this Delta Sharing technology, we also provide solutions like Marketplace and Clean Rooms, correct?– [Matei Zaharia] Yeah, and these will have the same benefits, basically: you can connect to them from any computing platform, and you can actually exchange and collaborate on data that way.– [Jillian] Yeah, yeah.
Very cool. Is there anything else you would like to add on Unity Catalog and Delta Sharing? I have more questions on ingest, DLT, DB SQL, things like that.– [Matei Zaharia] Yeah, they're both generally available, as I said in the webinar. So yeah, we're very excited to see people try them out and hear feedback.– [Jillian] Absolutely. So switching gears a little bit: we've talked about Delta, and we've talked about Unity Catalog, data governance, and data sharing. We had lots of questions around ingestion and ETL as we went through this webinar as well. One of the questions was, "For data ingestion, do I have to use other ETL tools such as Fivetran, Data Factory, et cetera, or are there components provided in Databricks itself?"– [Shant Hovsepian] I can take that. Hey, everyone, it's Sean.– [Jillian] Hey, Sean.– [Shant Hovsepian] Yeah. So Fivetran and Azure Data Factory are very frequently used ingestion tools with Databricks. Fivetran works great when you have various different data sources that you want to bring in. If you're in the Azure stack, ADF is also connected to just about anything. So we have tons of users that use those together with the rest of the Databricks platform. Out of the box, directly in Databricks with the Workflows product and DLT, there are various different types of ingestion pipelines you can build and data sources you can choose from. Really, it's the beauty of the Lakehouse: you have the flexibility to choose the tools and systems that you're familiar with and that work for you. It's compatible with just about all the leading methods out there.– [Jillian] So we had some specific questions, like, "Is ingestion from blob storage supported, for example?"– [Shant Hovsepian] Yeah, of course. If that's Azure Blob Storage, yes. And generally speaking about blob storage, wherever the data is stored, yeah, it's always available in the Lakehouse.– [Jillian] Yep. "Can I import data from Kafka or other streaming services?"– [Shant Hovsepian] Yes.
So you can use Auto Loader or the general streaming infrastructure to bring in data directly from Kafka topics. That's one of the nice things: you can have the real-time data in the Lakehouse directly alongside all of your data warehousing type workloads. So you don't ever have to worry about data being too old or stale; you can basically get that real-time feed right in.– [Jillian] And can we implement CDC using Auto Loader?– [Shant Hovsepian] Technically, you can. Auto Loader out of the box doesn't have CDC integration with various data sources, unless it was added recently. Franco, maybe you would know better than I would on that one? Yeah, I think it's compatible. Essentially, for the source that you're reading from, whether it's Postgres or some other database, if you can get the stream to come in with a WAL-based CDC format, then we can have Auto Loader handle that.– [Jillian] Okay. So as we have this data coming in and all of these data pipelines, how do we handle quality in Databricks?– [Shant Hovsepian] Oh, yeah. So if you've heard about Delta Live Tables, DLT, which was something we made generally available during the Data + AI Summit this summer, that's got an amazing feature called expectations, where you can essentially define what you expect your data to look like. When those expectations aren't met as the data's going through the pipeline and transformations, you can redirect those rows to an error table or an exceptions table, and get an alert and a notification from the system. So it's automatically maintained; you just need to define your constraints.– [Jillian] Very cool. So DLT is our solution for managing your data pipelines' quality, streaming data, et cetera?– [Shant Hovsepian] Yeah, specifically its expectations. It's a cool feature; if you look it up in the docs, it makes it very easy to constantly monitor your data quality and then deal with anything that doesn't meet your requirements.– [Jillian] Yeah.
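The expectations behavior Sean describes (declare a constraint, let valid rows pass, divert failures to an exceptions table, raise an alert) can be sketched in plain Python. In real DLT this is done declaratively with expectations on the pipeline itself; the helper below is a hypothetical stand-in:

```python
def apply_expectation(rows, name, predicate):
    """Split rows into (valid, quarantined) based on a named
    constraint, mimicking the expect-and-redirect behavior of DLT
    expectations. (Hypothetical helper, not the DLT API.)"""
    valid, quarantined = [], []
    for row in rows:
        (valid if predicate(row) else quarantined).append(row)
    if quarantined:
        # Stand-in for the alert/notification DLT would raise.
        print(f"expectation {name!r} failed for {len(quarantined)} row(s)")
    return valid, quarantined

orders = [{"id": 1, "amount": 40},
          {"id": 2, "amount": -5},
          {"id": 3, "amount": 12}]
good, bad = apply_expectation(orders, "non_negative_amount",
                              lambda r: r["amount"] >= 0)
```

The key design point is that bad rows are quarantined rather than silently dropped, so the pipeline keeps flowing while failures remain inspectable.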
We had a question around, I guess, extensibility. So all this data is coming into Delta Live Tables, into Delta tables created in Databricks. Can it be consumed through APIs?

– [Shant Hovsepian] Oh, yeah. So especially with the Lakehouse, a lot of the value is that you can get this real-time, ingested data and make it accessible to everybody in your organization. We've done a lot of work there. I talked about some of the new connectors that are available, but there are essentially REST APIs that you can use to get to your data and tables and query them, as well as native SDKs, everything from Java, C#, traditional data sources, Python, Go, and Node, where you can actually get to the data pretty...

– [Jillian] Very cool. Okay, so let's talk about Databricks SQL a little more. We touched on serverless compute in the presentation and in the demo for Databricks SQL. So can you clarify, is it using AWS or Azure resources, or Databricks compute itself? How does the infrastructure work?

– [Shant Hovsepian] Oh, yeah. That question came up a couple of times. So Databricks SQL works in two modes. There's essentially a managed serverless version where all of the infrastructure is managed by Databricks on your behalf, so nothing spins up in your VPC or your accounts, and the compute is instantly available and can scale nearly elastically based on your demand. Then there's what we call DB SQL Classic, which is essentially a set of purpose-built VMs that Databricks deploys into your VPC, your network, and your cloud account. So it's not fully managed infrastructure; there are some systems and resources that you'll see show up in your cloud console when using Databricks. But you get the flexibility to choose between both, depending on what your use case requirements are.

– [Jillian] Right.
Awesome. So in one of the examples we saw in the demo with Tableau, with the live connection: if the data doesn't change, does it still query the data every time you change the visualization, or is there a persistent cache on the serverless connection?

– [Shant Hovsepian] Oh yeah, for sure. There's a lot of intelligence there, and this is the beauty of Delta: it adds a reliable, logical way to reason about the data in the Delta Lake, so that we have what is traditionally the equivalent of a transaction snapshot ID. When someone queries the same data (and it's very 80/20; 80% of users tend to run the same queries over and over again), caching is a huge benefit. Behind the scenes we use Delta's snapshot transaction isolation to see, "Oh, the last time this query ran, we have those results, they're saved. We can just reuse them and not run this query all over again, if we know for a fact that Unity Catalog tells us nothing has changed with the security permissions on the data and user, and Delta tells us the underlying data hasn't been updated, so the transaction snapshot hasn't changed." In those cases, it'll immediately serve the data from the cache. It'll be super snappy.

– [Jillian] Yeah, thanks, Shant. So Miranda, that was an amazing demo. Thank you so much for putting all of this together. You showed us some very cool existing and upcoming capabilities in Databricks SQL. Python UDFs and Materialized Views specifically are still in private preview. Can you comment on when they'll be publicly available, or what's coming next?

– [Miranda] Sure. Yep, you are correct that both Python UDFs and Materialized Views are in private preview. The exact timing of public preview availability will depend on how we hit some of our exit criteria and the feedback we get during the private preview.
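The cache reuse logic described for serverless Databricks SQL (serve a saved result only while the Delta snapshot and the Unity Catalog permissions that produced it are unchanged) can be modeled as a lookup keyed on all three facts. This is a toy model of the idea, not Databricks internals; the query text and version numbers are made up.

```python
class ResultCache:
    """Toy model of snapshot-aware result caching: a cached result is
    valid only while both the table snapshot version and the permissions
    version that produced it are unchanged."""

    def __init__(self):
        self._cache = {}

    def get(self, query, snapshot_version, permissions_version):
        return self._cache.get((query, snapshot_version, permissions_version))

    def put(self, query, snapshot_version, permissions_version, result):
        self._cache[(query, snapshot_version, permissions_version)] = result

cache = ResultCache()
cache.put("SELECT count(*) FROM sales", 42, 7, result=1000)

# Same query, same snapshot, same permissions: served from cache.
hit = cache.get("SELECT count(*) FROM sales", 42, 7)
# Underlying Delta table changed (new snapshot): miss, must re-run.
miss = cache.get("SELECT count(*) FROM sales", 43, 7)
```

The point of the composite key is that any change to the data (a new snapshot) or to governance (a permissions update) automatically misses the cache and forces a fresh query.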
So I highly encourage anyone interested to sign up so that we can get you early access and you can help shape exactly what that experience looks like. Next on deck is going to be Query Federation. That private preview hasn't kicked off yet, but it's coming. If there's any interest there, again, let us know now so we can get you on the list and reach out when we're closer.

– [Jillian] Very cool. Thanks. Can you clarify how DB SQL, Databricks SQL, differs from Spark SQL?

– [Shant Hovsepian] Oh, I can take that. First and foremost, Databricks SQL is a whole service. It's a platform for your data warehousing needs. Databricks SQL is ANSI-standard SQL compliant, and it's built as a modern MPP architecture. Spark SQL is a part of the Apache Spark project, and it's essentially the expression layer that you use to define SQL transformations and SQL expressions while you're working with Spark. You can embed it in your Python, you can use it with the rest of the Spark API. It's not fully ANSI SQL compliant, and it's not a fully hosted, managed service to run all of your workloads. It's really just one of the parts of the general Spark project.

– [Jillian] Yeah, they're very, very different. But we had a few questions about this, so I wanted to clarify. I loved one of the comments we had throughout the presentation. Nathalie said, "Okay, so I can see you have parity with other cloud data warehouses, right? Data masking, materialized views, [inaudible 01:49:18], time travel. So how is Databricks SQL different from a data warehouse? How's Databricks different from a data warehouse?" Could you clarify that again?

– [Shant Hovsepian] Oh, this is a good one. Who wants to take this? I could do it, too.

– [Jillian] Well, Franco, I know this is a question a lot of your customers are asking you as well. He was put on mute.

– [Shant Hovsepian] Well, okay. So I'll take a first pass at it.
The most important thing, and one of the biggest ahas we've seen with Databricks SQL, is that yes, you can do your ETL, you can do your warehousing, you can do your BI. It's very flexible. It's extremely cost-effective. That's the beauty of being built on the open standards and open technology of the Lakehouse. There's a huge value there that you get from a TCO perspective. But the aha moments that we've really seen are when people can do simple AI and real-time stream processing trivially with that data in the data warehouse. That's in DB SQL: you don't have to move the data to another system, you don't need to copy it. You can get those predictive insights almost instantly from the exact same data. So if we go back to that data and AI maturity curve I was talking about: because you don't need to switch systems, you don't need to reload the data, you don't need to change your tooling set, you don't need to get retrained on it, you can go from basic BI to an AI aha with just a couple of commands. So it's very simple to go along that whole journey.

– [Jillian] Yeah. So I think to summarize, Databricks SQL is a serverless data warehouse on the Lakehouse, but it's part of a broader platform that does ETL, streaming, data science, machine learning, all in one. That's, I guess, one of the key differentiators. Actually, I see one question coming through the screen that's a great segue into the next section, because they wanted to talk about Photon a little bit. So can you elaborate on the difference between Photon and Databricks SQL?

– [Shant Hovsepian] Oh, yeah. So Photon is a general-purpose compute engine. Essentially, it's a technology; it's the engine behind the scenes that crunches numbers. So if you take a car, let's say a pickup truck, and you look at it from the outside, it's a pickup truck, but it may have a V8 engine or it may have a four-cylinder engine. In the case of Photon, it's like a V12 superpowered engine. DB SQL is the truck, right?
DB SQL is a data warehouse product for your Lakehouse architecture. It's got SQL, it integrates with Unity Catalog, [inaudible 01:52:06] governance and all those things. Photon is really just that new MPP engine built for modern CPUs. Most data warehouses out there were designed 30, 40 years ago, probably before I was born, and they were built for the type of hardware that existed back in the day. Photon is really optimized for the new set of modern CPU and data center technologies that exists today.

– [Jillian] Okay, thanks, Shant. So can you elaborate on how it compares with Spark? What was the journey between Spark and Photon, and what's the difference between the two?

– [Shant Hovsepian] Yeah. First of all, if people are interested in this topic, we published a great academic paper a few months ago at the SIGMOD conference. If you just Google for the Photon SIGMOD paper, we can provide some links. It's got way more details and information. But yeah, the beauty of Photon is that it's essentially 100% Spark API compatible. Spark is a much bigger distributed compute system. It's not just an execution engine; it also has different APIs, like the RDD API. Photon is really specifically focused on the DataFrame API. Spark has task schedulers, it has job management, it has a lot of the things that you need for distributed AI processing: broadcast variables, scheduling. So Spark is the bigger system; Photon is more just the expression evaluation engine, and the two work hand in hand.

– [Jillian] Absolutely. Down the line, Photon is Databricks proprietary, right? So it's compatible with the Spark APIs, like you mentioned, but it's becoming our default engine over time on the Lakehouse platform. Whereas Spark will remain open source (we're still committed to Spark), Photon is accelerating Spark in our platform. Is that right?

– [Shant Hovsepian] Yes. Apache Spark in Databricks is called the Databricks Runtime. So that's essentially our version of Spark.
It has a bunch of enhancements and features. Mind you, Databricks loves open source and we're 100% committed to it. We make the most contributions and enhancements to Apache Spark, so we do a lot of work with Apache Spark. It's just that our version of Apache Spark is the Databricks Runtime, called DBR for short. And Photon is an acceleration piece for DBR, the Databricks Runtime.

– [Jillian] Yeah. Is there anything special that customers need to do to use Photon on the platform?

– [Shant Hovsepian] Absolutely not. They just need to make sure they're using... I tend to recommend a very recent version of the Databricks Runtime. So when you use a data science and data engineering workspace cluster, you want to go with a recent DBR version; the most recent one you can find is 11.2 right now. Once you pick that DBR version, there'll be an option that says "Enable Photon." You just check that box and you'll get the awesome features and functionality.

– [Jillian] Yeah. And in Databricks SQL, it's just on by default, so there's really nothing to worry about.

– [Shant Hovsepian] Right. With Databricks SQL, Photon is one of all sorts of things buried inside that make it awesome; it's very much a self-contained data warehouse product, whereas elsewhere Photon is like a feature of your Spark cluster.

– [Jillian] Right. We're almost at the top of the hour, so this is my last question for you. This is a pricing question, because we had a few and I think it's important to address. "Is Unity Catalog available on standard Databricks or premium? How about Databricks SQL? Are there any extra costs to enabling Photon acceleration on the cluster as well?" If you could talk to that, it would be fantastic, and then we'll be ready to wrap up.

– [Shant Hovsepian] Cool. So right now Unity Catalog is available in the premium and enterprise SKUs; it's not available in the standard SKU. The Databricks standard SKU in general doesn't have table access controls and various features like that.
So a lot of the governance features that are important for many types of data warehousing workloads aren't available in the standard SKU. Databricks SQL also isn't available in the standard SKU; it's available in the premium and enterprise SKUs. Nobody else on the call is correcting me, so I believe that is still true and it hasn't changed. But suffice it to say, we want data to be secure and governed everywhere, so I think we'll find better ways of unlocking Unity Catalog for every workload. I think the second question you asked was about...

– [Jillian] Yeah, Photon as well. Do you want to speak about that?

– [Shant Hovsepian] Yeah, yeah. When you enable Photon, and I showed some of it in the slides, we've seen customers on average get about a 30% savings in overall TCO. So not only do things run faster, but they cost less money, because you don't need to keep your compute resources up as long. So overall, we've seen tremendous TCO savings. At the end of the day, it won't cost you more money to use Photon; it ends up costing less money in total. That said, in Databricks, when you do enable Photon, it's a different DBU billing rate. So it's charged at a different rate, but you will not keep your compute resources up as long, because everything's so much faster. So you always end up saving money in the long run.

– [Jillian] Yes, absolutely. Thank you, Shant. We're now at the top of the hour, so that concludes our event for today. I just want to say thank you again to all of our attendees for your time sitting with us today, and to our presenters for creating all of this amazing content.
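The Photon pricing argument above (a higher billing rate more than offset by a shorter runtime) is easy to check with illustrative arithmetic. Only the roughly 30% average TCO savings figure comes from the discussion; every rate and runtime below is made up for the sketch.

```python
# Illustrative TCO arithmetic: Photon bills at a higher DBU rate, but the
# job finishes faster, so total cost drops. All numbers are hypothetical
# except the ~30% average savings cited in the discussion.

baseline_rate = 1.0      # DBUs/hour without Photon (made up)
photon_rate = 2.0        # higher billing rate with Photon (made up)
baseline_hours = 10.0    # job runtime without Photon (made up)
photon_hours = 3.5       # same job with Photon's speedup (made up)

baseline_cost = baseline_rate * baseline_hours   # 10.0 DBUs
photon_cost = photon_rate * photon_hours         # 7.0 DBUs

savings = 1 - photon_cost / baseline_cost        # 0.30, i.e. ~30% cheaper
```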
We hope it was helpful and we-
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
|
https://www.databricks.com/dataaisummit/speaker/brad-corwin/#
|
Brad Corwin - Data + AI Summit 2023 | Databricks
Brad Corwin, Chief Data Scientist at Booz Allen Hamilton
Brad Corwin is a Chief Data Scientist at Booz Allen Hamilton with over a decade of professional experience in software engineering, data engineering and data science. He has a focus on innovative techniques and operationalizing data science solutions. He provides thought leadership to maximize outcomes and has a passion for building data-driven solutions in a rapid Agile environment. He currently leads Advana's Data Science and Data Engineering team to accelerate data and AI delivery.
Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided at this event.
|
https://www.databricks.com/dataaisummit/speaker/paul-roome
|
Paul Roome - Data + AI Summit 2023 | Databricks
Paul Roome, Staff Product Manager at Databricks
|
https://www.databricks.com/explore/de-data-warehousing/data-management-101
|
Data Management 101
|
https://www.databricks.com/dataaisummit/speaker/james-demmel
|
James Demmel - Data + AI Summit 2023 | Databricks
James Demmel, Dr. Richard Carl Dehmel Distinguished Professor at University of California, Berkeley
Prof. James Demmel is a Distinguished Professor of Mathematics and Computer Science at the University of California, Berkeley. He is the former chair of the Computer Science Division and EECS Department of UC Berkeley. Demmel is known for his work on numerical linear algebra libraries, including LAPACK, ScaLAPACK and SuperLU. Demmel's work on high-performance computing, such as communication-avoiding algorithms, has been recognized by many honors and awards. He is a member of the National Academy of Sciences, National Academy of Engineering, and American Academy of Arts and Sciences; a Fellow of the AAAS, ACM, AMS, IEEE, and SIAM; and a recipient of the ACM Paris Kanellakis Theory and Practice Award, the IPDPS Charles Babbage Award, the IEEE Sidney Fernbach Award, and 13 best paper prizes. Demmel was one of just two scientists honored with the Leslie Fox Prize for Numerical Analysis in 1986, and he was the winner of the J.H. Wilkinson Prize in Numerical Analysis and Scientific Computing, the IEEE's Sidney Fernbach Award "for computational science leadership in creating adaptive, innovative, high-performance linear algebra software", and the IEEE Computer Society Charles Babbage Award.
|
https://www.databricks.com/dataaisummit/speaker/hannes-muhleisen
|
Hannes Mühleisen - Data + AI Summit 2023 | Databricks
Hannes Mühleisen, Co-Founder and CEO at DuckDB Labs
Professor Dr. Hannes Mühleisen is a creator of the DuckDB database management system and the co-founder and CEO of DuckDB Labs, a consulting company providing services for DuckDB. He is also a senior researcher of the Database Architectures group at the Centrum Wiskunde & Informatica (CWI) in Amsterdam and a Professor of Data Engineering at Radboud University. Hannes' main interest is analytical data management systems.
|
https://www.databricks.com/solutions
|
Databricks Solution Accelerators - Databricks Use Cases

Databricks for Industry: no-compromise data analytics and AI solutions purpose-built for your industry.

Discover the Lakehouse for your industry:
Communications, Media & Entertainment: earn more attention, capture more imaginations.
Financial Services: build more trust, ensure peace of mind.
Healthcare and Life Sciences: discover and deliver better care.
Retail and Consumer Goods: lead customers on their journey and champion your brand.

Industry Solutions: from idea to proof of concept in as little as two weeks. Databricks Solution Accelerators are purpose-built guides, fully functional notebooks and best practices, that speed up results. Databricks customers are saving hours of discovery, design, development and testing, with many going from idea to proof of concept (PoC) in as little as two weeks.
|
https://www.databricks.com/dataaisummit/speaker/derek-slager/#
|
Derek Slager - Data + AI Summit 2023 | Databricks
Derek Slager, CTO at Amperity
|
https://www.databricks.com/it/company/contact
|
Contact Us - Databricks

Need help with training or support? Check out these additional resources.
Documentation: read the technical documentation for Databricks on AWS, Azure or Google Cloud.
Databricks Community: discuss, share and engage with Databricks users and experts.
Training: master the Databricks Lakehouse Platform with self-paced or instructor-led courses, or become a certified developer.
Support: already a customer? Click here if you have a technical or billing issue.
Our locations: find all of our offices worldwide and get in touch.
Knowledge base: quickly find answers to the most frequently asked questions about Databricks products and services.
|
https://www.databricks.com/blog/2021/11/17/databricks-open-source-genomics-toolkit-outperforms-leading-tools.html
|
How Glow Performs Genetic Association Studies 10x More Efficiently Than Hail - The Databricks Blog

Databricks' Open Source Genomics Toolkit Outperforms Leading Tools
by William Brandler, November 17, 2021, in Engineering Blog

Check out the solution accelerator to download the notebooks referred to throughout this blog. Genomic technologies are driving the creation of new therapeutics, from RNA vaccines to gene editing and diagnostics. Progress in these areas motivated us to build Glow, an open-source toolkit for genomics machine learning and data analytics. The toolkit is natively built on Apache Spark™, the leading engine for big data processing, enabling population-scale genomics. The project started as an industry collaboration between Databricks and the Regeneron Genetics Center. The goal is to advance research by building the next generation of genomics data analysis tools for the community.
We took inspiration from bioinformatics libraries such as Hail, Plink and bedtools, married with best-in-class techniques for large-scale data processing. Glow is now 10x more computationally efficient than industry-leading tools for genetic association studies.

The vision for Glow and genomic analysis at scale

The primary bottleneck slowing the growth in genomics is the complexity of data management and analytics. Our goal is to make it simple for data engineers and data scientists who are not trained in bioinformatics to contribute to genomics data processing in distributed cloud computing environments. Easing this bottleneck will in turn drive up the demand for more sequencing data in a positive feedback loop.

When to use Glow

Glow's domain of applicability is the aggregation and mining of genetic variant data, particularly for data analyses that are run many times iteratively or that take more than a few hours to complete, such as:

- Annotation pipelines
- Genetic association studies
- GPU-based deep learning algorithms
- Transforming data into and out of bioinformatics tools

As an example, Glow includes a distributed implementation of the Regenie method. You can run Regenie on a single node, which is recommended for academic scientists. But for industrial applications, Glow is the world's most cost-effective and scalable method of running thousands of association tests. Let's walk through how this works.

Benchmarking Glow against Hail

We focused on genetic association studies for benchmarks because they are the most computationally intensive steps in any analytics pipeline. Glow is >10x more performant for Firth regression relative to Hail without trading off accuracy (Figure 1).
We were able to achieve this performance because we apply an approximate method first, restricting the full, exact method to variants with a suggestive association with disease (P value below a screening threshold). See the Glow documentation to set up the environment.

Glow on the Databricks Lakehouse Platform

We had a small team of engineers working on a tight schedule to develop Glow. So how were we able to catch up with the world's leading biomedical research institute, the brain power behind Hail? We did it by developing Glow on the Databricks Lakehouse Platform in collaboration with industry partners. Databricks provides infrastructure that makes you productive with genomics data analytics. For example, you can use Databricks Jobs to build complex pipelines with multiple dependencies (Figure 2). Furthermore, Databricks is a secure platform trusted by both Fortune 100 and healthcare organizations with their most sensitive data, adhering to principles of data governance (FAIR), security and compliance (HIPAA and GDPR).

Figure 2: Glow on the Databricks Lakehouse Platform

What lies in store for the future?

Glow is now at a v1 level of maturity, and we are looking to the community to help build and extend it. There are lots of exciting things in store.

Genomics datasets are so large that batch processing with Apache Spark can hit capacity limits of certain cloud regions. This problem will be solved by the open Delta Lake format, which unifies batch and stream processing. By leveraging streaming, Delta Lake enables incremental processing of new samples or variants, with edge cases quarantined for further analysis. Combining Glow with Delta Lake will solve the "n+1 problem" in genomics.

A further problem in genomics research is data explosion. There are over 50 copies of the Cancer Genome Atlas on Amazon Web Services alone. The solution proposed today is a walled garden: managing datasets inside genomics domain platforms.
This solves data duplication, but then locks data into platforms. This friction will be eased through Delta Sharing, an open protocol for the secure real-time exchange of large datasets, which will enable secure data sharing between organizations, clouds and domain platforms. Unity Catalog will then make it easy to discover, audit and govern these data assets.

We're just at the beginning of the industrialization of genomics data analytics. To learn more, please see the Glow documentation, tech talks on YouTube, and workshops.
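The approximate-then-exact strategy described in the benchmarking section can be sketched as a generic two-stage filter: run a cheap approximate test on every variant, then re-run the expensive exact test (such as a full Firth fit) only where the approximate result is suggestive. This is a toy illustration in plain NumPy, not Glow's implementation; the function names and the 0.05 threshold are our own choices.

```python
import numpy as np

def two_stage_test(cheap_pvalues, exact_test, threshold=0.05):
    """Two-stage association testing sketch.

    Run a fast approximate test on every variant first, then re-run the
    expensive exact test only for variants whose approximate P value is
    below the suggestive threshold. Non-suggestive variants keep their
    approximate P value.
    """
    pvalues = np.asarray(cheap_pvalues, dtype=float).copy()
    suggestive = np.flatnonzero(pvalues < threshold)
    for i in suggestive:
        pvalues[i] = exact_test(i)  # expensive per-variant refit
    return pvalues, len(suggestive)

# Toy usage: only 2 of the 5 variants trigger the expensive test.
approx = [0.50, 0.01, 0.30, 0.04, 0.90]
exact = lambda i: approx[i] * 0.9  # stand-in for a full Firth fit
pvals, n_refit = two_stage_test(approx, exact)
```

Because the expensive fit runs only on the small suggestive subset, the total cost approaches that of the cheap screen alone as most variants are null.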
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights
|
https://www.databricks.com/dataaisummit/speaker/ben-coppersmith/#
|
Ben Coppersmith - Data + AI Summit 2023 | Databricks
SAN FRANCISCO, JUNE 26-29 / VIRTUAL, JUNE 28-29
Ben Coppersmith, Sr. Manager, Data Platform at Disney Streaming
Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided at this event.
|
https://www.databricks.com/dataaisummit/speaker/ioannis-papadopoulos
|
Ioannis Papadopoulos - Data + AI Summit 2023 | Databricks
Ioannis Papadopoulos, Cloud Technologist at Databricks
Ioannis is a Cloud Technologist at Databricks, working in collaboration with the technical field teams of AWS, Azure, and GCP.
He started his career at CERN as a research physicist and then moved to Apple to lead the business development of the research markets in EMEA.
Before joining Databricks, Ioannis co-founded three startups, where he championed the development of serverless architectures on the cloud.
Ioannis holds a Ph.D. in Physics, an MBA, and an Executive Master in digital transformation.
|
https://www.databricks.com/p/ebook/databricks-named-leader-by-gartner
|
Databricks Named a Leader | Databricks
2022 Gartner® Magic Quadrant™: Databricks Named a Leader in Cloud Database Management Systems

Databricks is proud to announce that Gartner has named us a Leader in the 2022 Magic Quadrant for Cloud Database Management Systems for the second consecutive year. We believe this recognition validates our vision for the lakehouse as a single, unified platform for data management and engineering, as well as analytics and AI. Download the report to learn why Gartner named Databricks a Leader and gain additional insight into the benefits that a lakehouse platform can bring to your organization.

Gartner, Magic Quadrant for Cloud Database Management Systems, Henry Cook, Merv Adrian, Rick Greenwald, Xingyu Gu, 13 December 2022.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and MAGIC QUADRANT is a registered trademark of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Databricks.
|
https://www.databricks.com/dataaisummit/speaker/ori-zohar
|
Ori Zohar - Data + AI Summit 2023 | Databricks
Ori Zohar, Principal Product Marketing Manager at Databricks
|
https://www.databricks.com/dataaisummit/speaker/lior-gavish/#
|
Lior Gavish - Data + AI Summit 2023 | Databricks
Lior Gavish, CTO and Co-founder at Monte Carlo Data
Lior Gavish is CTO and co-founder of Monte Carlo, a data reliability company backed by Accel, Redpoint Ventures, GGV, ICONIQ Growth, Salesforce Ventures, and IVP. Prior to Monte Carlo, Lior co-founded cybersecurity startup Sookasa, which was acquired by Barracuda in 2016. At Barracuda, Lior was SVP of Engineering, launching award-winning ML products for fraud prevention. Lior holds an MBA from Stanford and an M.Sc. in Computer Science from Tel Aviv University.
|
https://www.databricks.com/dataaisummit/speaker/gaurav-saraf
|
Gaurav Saraf - Data + AI Summit 2023 | Databricks
Gaurav Saraf, Product Manager at Databricks
|
https://www.databricks.com/jp/company/diversity
|
Diversity & Inclusion | Databricks

Sharing the future of data

Databricks is committed to advancing diversity, equity and inclusion.

Data-driven. Diversity-minded. Our mission is to diversify big data, and that starts with diversifying our own team. We believe that diverse knowledge, experiences, perspectives, insights and skills drive innovation and deepen the connections among our employees and with our customers. We strive to foster a culture of belonging that empowers everyone to do the best work of their careers. From ensuring equal pay for equal work to building programs that respect, educate and grow our teams, diversity, equity and inclusion (DEI) is woven into everything we do.

Fair Pay Workplace certification
We are one of the first six organizations to be certified by Fair Pay Workplace. As part of our effort to achieve pay equity, Databricks underwent a rigorous evaluation of its compensation data and practices and has committed to accountability through ongoing pay-equity analyses.

Gaingels and Flucas Ventures
Our commitment to diversity, equity and inclusion extends across our whole team, from the employees who work with us to the people who invest in us. That is why we are proud to partner with world-class investors, such as Gaingels and Flucas Ventures, that focus on funding companies with a deep commitment to DEI.

"We know that the more data we have, the better the insights we can draw from it. Databricks succeeds by bringing in the most diverse possible range of backgrounds, experiences and perspectives." (Ali Ghodsi, Co-founder and CEO, Databricks)

Our communities

We believe that empowering our employees is the key to unlocking our full potential. Databricks' vibrant employee resource groups (ERGs) are employee-led collectives that play an important role in creating an inclusive, supportive environment at Databricks. These diverse communities of employees and allies create spaces of connection and respect, and raise awareness of important issues.

Queeries Network
Veterans Network
Black Employee Network
Women’s Network
LatinX Network
Asian Employee Network
Employee voices

"At Databricks, we are building the kind of team we want to belong to. That means creating the safety to be your authentic self, and offering that same comfortable space to the people around you. The care we show one another, and our dedication to inclusion, has a big impact across our whole community." (Stacy Kerkela, Director of Engineering)

"Having the LatinX Network at Databricks gives me a stronger, more open sense of community. At Databricks, the employee resource groups welcome and educate everyone, regardless of background." (Miguel Peralvo, Senior Solutions Architect)

"Many companies focus on increasing gender diversity, but it is great to be part of a company that takes visible action to support change." (Allie Emrich, Program Management, Product)

Inclusion in practice

Partnership with ColorStack and Rewriting the Code: Our university recruiting team partners with ColorStack and Rewriting the Code to help more women and Black and Latin American students pursue careers in technology.

Women in Tech mentorship program: Databricks recognizes the importance of providing professional growth and advancement opportunities to historically underrepresented communities.

2021 InHerSight Award: Databricks is pleased to have been named by women as one of the best computer software workplaces for women. Learn more on our InHerSight profile.

Want to build the future of data with us? Our mission is to simplify and democratize data and AI, and that is only possible with our employees.
|
https://www.databricks.com/glossary/personalized-banking
|
Personalized Finance

What is Personalized Finance?

Financial products and services are becoming increasingly commoditized, and consumers are becoming more discerning as the media and retail industries have increased their penchant for personalized experiences. To remain competitive, banks have to offer an engaging banking experience that goes beyond traditional banking via personalized insights, recommendations, financial goal setting and reporting capabilities, all powered by advanced analytics such as geospatial analysis or natural language processing (NLP). Personalized finance, also known as open finance, is based on data-sharing principles that can empower banks to offer their clients a broader range of possibilities suited specifically to their needs. Personalized finance is made possible through open banking standards and evolving regulations across the world.

What does Personalized Finance look like in practice?

In today's on-demand culture, personalized finance means that customers want their bank, insurance carrier or wealth manager to meet them where they are, in the products and channels that they use.
For example, real-time installment lending, also called Buy Now, Pay Later (BNPL), is automatically added to your retail shopping experience. Perhaps you have recently opened your banking app and noticed new features for adding accounts from other banks so you can monitor all your accounts at once, or you added your bank account to your investing app to see how much you can invest this month. These are examples of personalized finance giving consumers more control over their financial well-being. As another example, a recent blog from the Spanish bank BBVA1 describes in detail how the bank "uses data science to identify the characteristics that define them (always with their prior consent) and therefore offer recommendations on how to manage their everyday finances, lower their debt, save or plan for the future."

Why is Personalized Finance important?

Customers want more choice, more control and a seamless customer experience. Beyond their account or credit card balances, customers increasingly want access to information about their finances that can help them make more informed decisions about their money and financial goals. At the same time, they expect to be presented with personalized offers that best fit their investment posture and preferences.
- 72% of customers rate personalization as "highly important" in today's financial services landscape
- 60% of consumers say they are likely to become repeat buyers after a personalized shopping experience

For the financial services institution (FSI), the benefits of personalized finance are:

- Customer loyalty and retention, through an enhanced customer experience targeted to customers' needs and behavior
- Higher engagement and conversion rates, resulting in a greater share of wallet and higher customer lifetime value
- Stronger marketing ROI, through targeted marketing campaigns and consistent messaging across channels

What are the challenges in implementing Personalized Finance?

- Legacy infrastructure: Legacy technologies can't harness insights from fast-growing unstructured and alternative data sets, and don't offer the open data sharing capabilities needed to fuel collaboration.
- Strict data and privacy regulations: A number of high-profile instances of data theft and breaches have made many consumers more cautious about sharing their personal data.
- Access to third-party data: Vendor lock-in and disjointed tools hinder the real-time analytics that drive and democratize smarter financial decisions.
- Data silos: Highly complex workflows, disparate technologies and a spreadsheet culture make collaboration difficult and keep data trapped in silos across multiple business units.

How does Databricks help financial institutions with Personalized Finance?

Databricks Lakehouse for Financial Services provides banking, insurance and capital markets companies with the ability to unify data and AI on an open and collaborative platform to deliver personalized customer experiences, minimize risk and accelerate innovation. It eliminates the technical limitations of legacy systems and enables FSIs to leverage all of their data to minimize risk while accelerating transformative innovation.
It allows FSIs to aggregate different types of data, from market data to alternative data, enabling hyper-personalized experiences that drive cross-selling opportunities, customer satisfaction and share of wallet. By unifying data and AI, FSIs are also able to simplify the complexity of regulatory reporting, risk management and compliance by securely streamlining the acquisition, processing and transmission of data to empower better data governance practices.

1 How BBVA uses data to look after its customers' financial health

Additional resources:
- Big Book of Use Cases in Financial Services
- Gartner® Hype Cycle™ for Financial Data and Analytics Governance, 2022
- Hyper-Personalization Accelerator for Banks and Fintechs
- Reshaping Retail Banking with Personalization workshop with Deloitte
- Lakehouse for Financial Services solutions
|
https://www.databricks.com/dataaisummit/speaker/franco-patano/#
|
Franco Patano - Data + AI Summit 2023 | Databricks
Franco Patano, Product Specialist at Databricks
|
https://www.databricks.com/dataaisummit/speaker/antonio-castelo/#
|
Antonio Castelo - Data + AI Summit 2023 | Databricks
Antonio Castelo, Collibra
Antonio built the first partner integration with Unity Catalog and has been instrumental in building customer momentum around the partnership with Databricks.
|
https://www.databricks.com/glossary/hosted-spark
|
What is Hosted Spark?
Apache Spark™ is a fast, general-purpose cluster computing system for big data, originally developed in 2009 at UC Berkeley and built around speed, ease of use, and advanced analytics. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports several other tools, such as Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

Spark provides two modes for data exploration:

- Interactive
- Batch

To simplify end-user interaction, Spark is also offered to organizations as part of a unified hosted data platform. Without direct access to Spark resources, remote applications historically faced a longer route to production. To overcome this obstacle, services have been created that let remote apps connect to a Spark cluster efficiently over a REST API from anywhere. These interfaces support executing snippets of code or whole programs in a Spark context that runs locally or in Apache Hadoop YARN.
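Such REST interactions can be sketched in a few lines of Python. The endpoint paths and payload fields below follow the Apache Livy API, one widely used hosted Spark interface; the server URL is a placeholder assumption, so adapt everything to your own service.

```python
# Sketch: driving a hosted Spark cluster over a Livy-style REST API.
# The URL is a placeholder assumption; payload shapes follow Apache Livy.
import json
from urllib import request

LIVY_URL = "http://localhost:8998"  # placeholder; point at your Livy server


def session_payload(kind="pyspark"):
    """Body for POST /sessions: start an interactive session of the given kind."""
    return {"kind": kind}


def statement_payload(code):
    """Body for POST /sessions/{id}/statements: execute a snippet of code."""
    return {"code": code}


def batch_payload(file, class_name=None):
    """Body for POST /batches: submit a whole program (a jar or .py file)."""
    body = {"file": file}
    if class_name:
        body["className"] = class_name
    return body


def post(path, payload):
    """POST a JSON payload to the REST API and return the parsed response."""
    req = request.Request(
        LIVY_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires a running server, so shown as comments only):
# session = post("/sessions", session_payload())
# post("/sessions/%d/statements" % session["id"],
#      statement_payload("spark.range(100).count()"))
```

The same pattern covers both modes the article describes: interactive exploration maps to the `/sessions` endpoints, while batch submission of complete programs maps to `/batches`.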
Hosted Spark interfaces have proved to be turnkey solutions: they streamline the interaction between Spark and application servers, simplifying the architecture required by interactive web and mobile apps.

Hosted Spark services provide these features:

- Interactive Scala, Python, and R shells
- Batch submissions in Scala, Java, and Python
- Multiple users can share the same server
- Jobs can be submitted from anywhere through REST
- No changes to your programs are required

Organizations can now easily overcome the bottlenecks that impede their ability to operationalize Spark and instead focus on capturing the value promised by big data.

Additional Resources

- About Apache Spark
- Azure Databricks—Apache Spark as a Service
- Learning Apache Spark 2nd Edition eBook

Back to Glossary
|
https://www.databricks.com/explore/data-science-machine-learning/dsml-demo?itm_data=DSproduct-pf-dsml
|
Data Science and Machine Learning | Databricks Demo
|
https://www.databricks.com/de/discover/beacons
|
Beacons Hub Page | Databricks
Databricks Beacons Program

The Databricks Beacons program is our way to thank and recognize the community members, data scientists, data engineers, developers and open source enthusiasts who go above and beyond to uplift the data and AI community. Whether they are speaking at conferences, leading workshops, teaching, mentoring, blogging, writing books, creating tutorials, offering support in forums or organizing meetups, they inspire others and encourage knowledge sharing – all while helping to solve tough data problems.

Meet the Databricks Beacons

Beacons share their passion and technical expertise with audiences around the world. They are contributors to a variety of open source projects including Apache Spark™, Delta Lake, MLflow and others. Don’t hesitate to reach out to them on social to see what they’re working on.

- Adi Polak (Israel): Senior Software Engineer and Developer Advocate in the Azure Engineering organization at Microsoft.
- Bartosz Konieczny (France): Data Engineering Consultant and instructor.
- R. Tyler Croy (United States): Director of Platform Engineering at Scribd; an open source developer for over 14 years.
- Kent Yao (China): Apache Spark™ committer and staff software engineer at NetEase.
- Kyle Hamilton (Ireland): Chief Innovation and Data Officer at iQ4 and a lecturer at the University of California, Berkeley.
- Jacek Laskowski (Poland): IT freelancer who specializes in Apache Spark™, Delta Lake and Apache Kafka.
- Scott Haines (United States): Distinguished Software Engineer at Nike, where he helps drive Apache Spark™ adoption.
- Simon Whiteley (United Kingdom): Director of Engineering at Advancing Analytics, Microsoft Data Platform MVP and Data + AI Summit speaker.
- Geeta Chauhan (United States): Leads AI/PyTorch Partnership Engineering at Facebook AI, focusing on strategic initiatives.
- Lorenz Walthert (Switzerland): Data scientist, MLflow contributor, climate activist and GSoC participant.
- Yitao Li (Canada): Software engineer at SafeGraph and the current maintainer of sparklyr, an R interface for Apache Spark™.
- Maciej Szymkiewicz (Poland): Apache Spark™ committer, available for mentoring and consulting.
- Takeshi Yamamuro (Japan): Software engineer, Apache Spark™ committer and PMC member at NTT, Inc., who mainly works on Spark SQL.

Membership Criteria

Beacons are first and foremost practitioners in the data and AI community whose technology focus includes MLflow, Delta Lake, Apache Spark™, Databricks and related ecosystem technologies. Beacons actively build others up throughout the year by teaching, blogging, speaking, mentoring, organizing meetups, creating content, answering questions on forums and more.

Program Benefits

- Peer networking and sharing through a private Slack channel
- Access to Databricks and OSS subject matter experts
- Recognition on the Databricks website and social channels
- Custom swag
- In the future, sponsored travel and lodging to attend select Databricks events
- Sponsorship and swag for meetups

Nominate a Peer

We’d love to hear from you! Tell us who made continued outstanding contributions to the data and AI community. Candidates must be nominated by someone in the community, and everyone — including customers, partners, Databricks employees or even a current Beacon — is welcome to submit a nomination. Applications will be reviewed on a rolling basis, and membership is valid for one year.
|
https://www.databricks.com/fr/legal/privacynotice#dbadditionalinformation
|
Privacy Notice | Databricks

This Privacy Notice explains how Databricks, Inc. and its affiliates (“Databricks”, “we”, “our”, and “us”) collects, uses, shares and otherwise processes your personal information (also known as personal data) in connection with the use of Databricks websites and applications that link to this Privacy Notice (the “Sites”), our data processing platform products and services (the “Platform Services”) and in the usual course of business, such as in connection with our events, sales, and marketing activities (collectively, “Databricks Services”).
It also contains information about your choices and privacy rights.

Our Services

We provide the Platform Services to our customers and users (collectively, “Customers”) under an agreement with them and solely for their benefit and the benefit of personnel authorized to use the Platform Services (“Authorized Users”). Our processing of such data is governed by our agreement with the relevant Customer. This Privacy Notice does not apply to (i) the data that our Customers upload, submit or otherwise make available to the Platform Services and other data that we process on their behalf, as defined in our agreement with the Customer; (ii) any products, services, websites, or content that are offered by third parties or that have their own privacy notice; or (iii) personal information that we collect and process in connection with our recruitment activities, which is covered under our Applicant Privacy Notice.

We recommend that you read this Privacy Notice in full to ensure that you are informed. However, if you only want to access a particular section of this Privacy Notice, you can use the links below:

- Information We Collect About You
- How We Use Your Information
- How We Share Your Information
- International Transfers
- Your Choices and Rights
- Additional Information for Certain Jurisdictions
- Other Important Information
- Changes to this Notice
- How to Contact Us

Information We Collect About You

Information that we collect from or about you includes information you provide, information we collect automatically, and information we receive from other sources.

Information you provide

When using our Databricks Services, we may collect certain information, such as your name, email address, phone number, postal address, job title, and company name.
We may also collect other information that you provide through your interactions with us, for example if you request information about our Platform Services, interact with our sales team or contact customer support, complete a survey, provide feedback or post comments, register for an event, or take part in marketing activities. We may keep a record of your communications with us and other information you share during the course of the communications.

When you create an account, for example, through our Sites or register to use our Platform Services, we may collect your personal information, such as your name and contact information. We may also collect credit card information if chosen by you as a payment method, which may be shared with our third party service providers, including for payment and billing purposes.

Information we collect automatically

We use standard automated data collection tools, such as cookies, web beacons, tracking pixels, tags, and similar tools, to collect information about how people use our Sites and interact with our emails. For example, when you visit our Sites we (or an authorized third party) may collect certain information from you or your device. This may include information about your computer or device (such as operating system, device identifier, browser language, and Internet Protocol (IP) address), and information about your activities on our Sites (such as how you came to our Sites, access times, the links you click on, and other statistical information). For example, your IP address may be used to derive general location information. We use this information to help us understand how you are using our Sites and how to better provide the Sites to you. We may also use web beacons and pixels in our emails. For example, we may place a pixel in our emails that notifies us when you click on a link in the email. We use these technologies to improve our communications.
The types of data collection tools we use may change over time as technology evolves. You can learn more about our use of cookies and similar tools, as well as how to opt out of certain data collection, by visiting our Cookie Notice.

When you use our Platform Services, we automatically collect information about how you are using the Platform Services (“Usage Data”). While most Usage Data is not personal information, it may include information about your account (such as User ID, email address, or Internet Protocol (IP) address) and information about your computer or device (such as browser type and operating system). It may also include information about your activities within the Platform Services, such as the pages or features you access or use, the time spent on those pages or features, search terms entered, commands executed, information about the types and size of files analyzed via the Platform Services, and other statistical information relating to your use of the Platform Services. We collect Usage Data to provide, support and operate the Platform Services, for network and information security, and to better understand how our Authorized Users and Customers are using the Platform Services to improve our products and services. We may also use the information we collect automatically (for example, IP address, and unique device identifiers) to identify the same unique person across Databricks Services to provide a more seamless and personalized experience to you.

Information we receive from other sources

We may obtain information about you from third party sources, including resellers, distributors, business partners, event sponsors, security and fraud detection services, social media platforms, and publicly available sources.
Examples of information that we receive from third parties include marketing and sales information (such as name, email address, phone number and similar contact information), and purchase, support and other information about your interactions with our Sites and Platform Services. We may combine such information with the information we receive and collect from you.

How We Use Your Information

We use your personal information to provide, maintain, improve and update our Databricks Services. Our purposes for collecting your personal information include:

- to provide, maintain, deliver and update the Databricks Services;
- to create and maintain your Databricks account;
- to measure your use and improve Databricks Services, and to develop new products and services;
- for billing, payment, or account management; for example, to identify your account and correctly identify your usage of our products and services;
- to provide you with customer service and support;
- to register and provide you with training and certification programs;
- to investigate security issues, prevent fraud, or combat the illegal or controlled uses of our products and services;
- for sales phone calls for training and coaching purposes, quality assurance and administration (in accordance with applicable laws), including to analyze sales calls using analytics tools to gain better insights into our interactions with customers;
- to send you notifications about the Databricks Services, including technical notices, updates, security alerts, administrative messages and invoices;
- to respond to your questions, comments, and requests, including to keep in contact with you regarding the products and services you use;
- to tailor and send you newsletters, emails and other content to promote our products and services (you can always unsubscribe from our marketing emails by clicking here) and to allow third party partners (like our event sponsors) to send you marketing communications about their services, in accordance with your preferences;
- to personalize your experience when using our Sites and Platform Services;
- for advertising purposes; for example, to display and measure advertising on third party websites;
- to contact you to conduct surveys and for market research purposes;
- to generate and analyze statistical information about how our Sites and Platform Services are used in the aggregate;
- for other legitimate interests or lawful business purposes; for example, customer surveys, collecting feedback, and conducting audits;
- to comply with our obligations under applicable law, legal process, or government regulation; and
- for other purposes, where you have given consent.

How We Share Your Information

We may share your personal information with third parties as follows:

- with our affiliates and subsidiaries for the purposes described in this Privacy Notice;
- with our service providers who assist us in providing the Databricks Services, such as billing, payment card processing, customer support, sales and marketing, and data analysis, subject to confidentiality obligations and the requirement that those service providers do not sell your personal information;
- with our service providers who assist us with detecting and preventing fraud, security threats or other illegal or malicious behavior, for example Sift who provides fraud detection services where your personal information is processed by Sift in accordance with its Privacy Notice available at https://sift.com/service-privacy;
- with third party business partners, such as resellers, distributors, and/or referral partners, who are involved in providing content, products or services to our prospects or Customers. We may also engage with third party partners who are working with us to organize or sponsor an event to which you have registered to enable them to contact you about the event or their services (but only where we have a lawful basis to do so, such as your consent where required by applicable law);
- with marketing partners, such as advertising providers that tailor online ads to your interests based on information they collect about your online activity (known as interest-based advertising);
- with the organization that is sponsoring your training or certification program, for example to notify them of your registration and completion of the course;
- when authorized by law or we deem necessary to comply with a legal process;
- when required to protect and defend the rights or property of Databricks or our Customers, including the security of our Sites, products and services (including the Platform Services);
- when necessary to protect the personal safety, property or other rights of the public, Databricks or our Customers;
- where it has been de-identified, including through aggregation or anonymization;
- when you instruct us to do so;
- where you have consented to the sharing of your information with third parties; or
- in connection with a merger, sale, financing or reorganization of all or part of our business.

International Transfers

Databricks may transfer your personal information to countries other than your country of residence. In particular, we may transfer your personal information to the United States and other countries where our affiliates, business partners and service providers are located. These countries may not have equivalent data protection laws to the country where you reside. Wherever we process your personal information, we take appropriate steps to ensure it is protected in accordance with this Privacy Notice and applicable data protection laws.
These safeguards include implementing the European Commission’s Standard Contractual Clauses for transfers of personal information from the EEA or Switzerland between us and our business partners and service providers, and equivalent measures for transfers of personal information from the United Kingdom. Databricks also offers our Customers the ability to enter into a data processing addendum (DPA) that contains the Standard Contractual Clauses, for transfers between us and our Customers. We also make use of supplementary measures to ensure your information is adequately protected.

Privacy Shield Notice

Databricks adheres to the principles of the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks, although Databricks no longer relies on the EU-U.S. or Swiss-U.S. Privacy Shield Frameworks as a legal basis for transfers of personal information in light of the judgment of the Court of Justice of the European Union in Case C-311/18. To learn more, visit our Privacy Shield Notice.

Your Choices and Rights

We offer you choices regarding the collection, use and sharing of your personal information and we will respect the choices you make in accordance with applicable law. Please note that if you decide not to provide us with certain personal information, you may not be able to access certain features of the Sites or use the Platform Services.

Account information

If you want to correct, update or delete your account information, please log on to your Databricks account and update your profile.

Opt out of marketing

We may periodically send you marketing communications that promote our products and services consistent with your choices. You may opt out from receiving such communications, either by following the unsubscribe instructions in the communication you receive or by clicking here.
Please note that we may still send you important service-related communications regarding our products or services, such as communications about your subscription or account, service announcements or security information.

Your privacy rights

Depending upon your place of residence, you may have rights in relation to your personal information. Please review the jurisdiction-specific sections below, including the disclosures for California residents. Depending on applicable data protection laws, those rights may include asking us to provide certain information about our collection and processing of your personal information, or requesting access, correction or deletion of your personal information. You also have the right to withdraw your consent, to the extent we rely on consent to process your personal information. If you wish to exercise any of your rights under applicable data protection laws, submit a request online by completing the request form here or emailing us at [email protected]. We will respond to requests that we receive in accordance with applicable laws. Databricks may take certain steps to verify your request using information available to us, such as your email address or other information associated with your Databricks account, and if needed we may ask you to provide additional information for the purposes of verifying your request. Any information you provide to us for verification purposes will only be used to process and maintain a record of your request.

As described above, we may also process personal information that has been submitted by a Customer to our Platform Services. If your personal information has been submitted to the Platform Services by or on behalf of a Databricks Customer and you wish to exercise your privacy rights, please direct your request to the relevant Customer.
For other inquiries, please contact us at [email protected].

Additional Information for Certain Jurisdictions

This section provides additional information about our privacy practices for certain jurisdictions.

California

If you are a California resident, the California Consumer Privacy Act (“CCPA”) requires us to provide you with additional information regarding your rights with respect to your “personal information”. This information is described in our Supplemental Privacy Notice to California Residents.

Other US States

Depending on applicable laws in your state of residence, you may request to: (1) confirm whether or not we process your personal information; (2) access, correct, or delete personal information we maintain about you; (3) receive a portable copy of such personal information; and/or (4) restrict or opt out of certain processing of your personal information, such as targeted advertising, or profiling in furtherance of decisions that produce legal or similarly significant effects. If we refuse to take action on a request, we will provide instructions on how you may appeal the decision. We will respond to requests consistent with applicable law.

European Economic Area, UK and Switzerland

If you are located in the European Economic Area, United Kingdom or Switzerland, the controller of your personal information is Databricks, Inc., 160 Spear Street, Suite 1300, San Francisco, CA 94105, United States. We only collect your personal information if we have a legal basis for doing so. The legal basis that we rely on depends on the personal information concerned and the specific context in which we collect it.
Generally, we collect and process your personal information where:

- We need it to enter into or perform a contract with you, such as to provide you with the Platform Services, respond to your request, or provide you with customer support;
- We need to process your personal information to comply with a legal obligation (such as to comply with applicable legal, tax and accounting requirements) or to protect the vital interests of you or other individuals;
- You give us your consent, such as to receive certain marketing communications; or
- We have a legitimate interest, such as to respond to your requests and inquiries, to ensure the security of the Sites and Platform Services, to detect and prevent fraud, to maintain, customize and improve the Sites and Platform Services, to promote Databricks and our Platform Services, and to defend our interests and rights.

If you have consented to our use of your personal information for a specific purpose, you have the right to change your mind at any time, but this will not affect our processing of your information that has already taken place. You also have the following rights with respect to your personal information:

- The right to access, correct, update, or request deletion of your personal information;
- The right to object to the processing of your personal information or ask that we restrict the processing of your personal information;
- The right to request portability of your personal information;
- The right to withdraw your consent at any time, if we collected and processed your personal information with your consent; and
- The right to lodge a complaint with your national data protection authority or equivalent regulatory body.

If you wish to exercise any of your rights under data protection laws, please contact us as described under “Your Choices and Rights”.

Other Important Information

Notice to Authorized Users

Our Platform Services are intended to be used by organizations.
Where the Platform Services are made available to you through an organization (e.g., your employer), that organization is the administrator of the Platform Services and responsible for the accounts and/or services over which it has control. For example, administrators can access and change information in your account or restrict and terminate your access to the Platform Services. We are not responsible for the privacy or security practices of an administrator's organization, which may be different from this Privacy Notice. Please contact your organization or refer to your organization's policies for more information.

Data Retention

Databricks retains the personal information described in this Privacy Notice for as long as you use our Databricks Services, as may be required by law (for example, to comply with applicable legal, tax or accounting requirements), as necessary for other legitimate business or commercial purposes described in this Privacy Notice (for example, to resolve disputes or enforce our agreements), or as otherwise communicated to you.

Security

We are committed to protecting your information. We use a variety of technical, physical, and organizational security measures designed to protect against unauthorized access, alteration, disclosure, or destruction of information. However, no security measures are perfect or impenetrable. As such, we cannot guarantee the security of your information.

Third Party Services

Our Databricks Services may contain links to third party websites, applications, services, or social networks (including co-branded websites or products that are maintained by one of our business partners). We may also make available certain features that allow you to sign into our Sites using third party login credentials (such as LinkedIn, Facebook, Twitter and Google+) or access third party services from our Platform Services (such as Github).
Any information that you choose to submit to third party services is not covered by this Privacy Notice. We encourage you to read the terms of use and privacy notices of such third party services before sharing your information with them, to understand how your information may be collected and used.

Children's Data
The Sites and Platform Services are not directed to children under 18 years of age, and Databricks does not knowingly collect personal information from children under 18. If we learn that we have collected any personal information from children under 18, we will promptly take steps to delete such information. If you are aware that a child has submitted such information to us, please contact us using the details provided below.

Changes to this Notice
Databricks may change this Privacy Notice from time to time. We will post any changes on this page and, if we make material changes, provide a more prominent notice (for example, by adding a statement to the website landing page, providing notice through the Platform Services login screen, or by emailing you). You can see the date on which the latest version of this Privacy Notice was posted below. If you disagree with any changes to this Privacy Notice, you should stop using the Databricks Services and deactivate your Databricks account.

How to Contact Us
Please contact us at [email protected] if you have any questions about our privacy practices or this Privacy Notice. 
You can also write to us at Databricks Inc., 160 Spear Street, Suite 1300, San Francisco, CA 94105, Attn: Privacy.

If you interact with Databricks through or on behalf of your organization, then your personal information may also be subject to your organization's privacy practices, and you should direct any questions to that organization.

Last updated: January 3, 2023
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights
|
https://www.databricks.com/dataaisummit/speaker/nic-jansma
|
Nic Jansma - Data + AI Summit 2023 | Databricks
SAN FRANCISCO, JUNE 26-29 | VIRTUAL, JUNE 28-29
Nic Jansma, Senior Principal Lead Engineer at Akamai
Nic is a software developer at Akamai building high-performance websites, apps and open-source tools, and co-chair of the W3C Web Performance Working Group.
|
https://www.databricks.com/dataaisummit/sponsors
|
Sponsors - Data + AI Summit 2023 | Databricks
2023 Sponsors
If you are interested in sponsoring Data + AI Summit 2023, please contact our Sponsorship Management Team.
Sponsor tiers: Organizing Sponsor; Platinum Sponsor; Diamond Sponsors; Gold+ Sponsors; Gold Sponsors; Silver Sponsors; Bronze Sponsors.
|
https://www.databricks.com/discover/demos/unitycatalog
|
Unity Catalog Demo - Databricks

Unity Catalog Demos
Databricks Unity Catalog is a unified governance solution for all data and AI assets, including files, tables and machine learning models in your lakehouse on any cloud. Unity Catalog simplifies governance of data and AI assets on the Databricks Lakehouse Platform by providing fine-grained governance via a single standard interface based on ANSI SQL that works across clouds. With Unity Catalog, data teams benefit from a companywide catalog with centralized access permissions, audit controls, automated lineage, and built-in data search and discovery. Unity Catalog also natively supports Delta Sharing, an open standard for securely sharing live data from your lakehouse to any computing platform.

Unity Catalog Overview Demo
In this brief demonstration, we give you a first look at Unity Catalog, a unified governance solution for all data and AI assets. Unity Catalog provides a single interface to centrally manage access permissions and audit controls for all data assets in your lakehouse, along with the capability to easily search, view lineage and share data.
Automated Data Lineage With Unity Catalog
Unity Catalog automatically tracks data lineage for all workloads in SQL, R, Python and Scala. Data lineage is captured down to the table and column levels and displayed in real time with just a few clicks. Unity Catalog also captures lineage for other data assets such as notebooks, workflows and dashboards. Lineage can be retrieved via REST API to support integrations with other data catalogs and governance tools.
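As an illustration of that REST integration point, the sketch below builds (but does not send) a lineage request for a single table. This is a minimal sketch: the workspace URL, access token and table name are placeholders, and the endpoint path is based on the Databricks lineage-tracking API as documented at the time of writing, so verify it against the current REST reference before relying on it.

```python
import urllib.request

def table_lineage_request(host: str, token: str, table_name: str) -> urllib.request.Request:
    """Build a GET request for a table's upstream/downstream lineage.

    Assumes the Databricks lineage-tracking endpoint
    GET /api/2.0/lineage-tracking/table-lineage with a `table_name`
    query parameter; host and token are placeholders.
    """
    url = f"{host}/api/2.0/lineage-tracking/table-lineage?table_name={table_name}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = table_lineage_request(
    "https://example.cloud.databricks.com",  # placeholder workspace URL
    "DUMMY-TOKEN",                           # placeholder personal access token
    "main.sales.orders",                     # three-level Unity Catalog table name
)
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) against a real workspace would return JSON describing the table's upstream and downstream tables, which is how external catalogs and governance tools consume lineage.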
|
https://www.databricks.com/fr/company/partners/data-partner-program
|
Data Partner Program | Databricks

With Databricks, reach a broad, open ecosystem of data consumers.
From a single platform, Databricks helps its data provider partners monetize their data assets across a broad, open ecosystem of data consumers. Our partners can leverage the Databricks Lakehouse Platform to reach more customers, reduce costs and deliver a best-in-class experience for all their data sharing needs.

Benefits of partnering as a data provider:
- Reach more consumers: extend your coverage to more data consumers from an open, secure platform
- A better customer experience: reduced setup and activation time for data consumers
- Marketing support: increased exposure through Databricks marketing support
- Technology for data products: leverage the market-leading Lakehouse Platform for data, analytics and AI
- Access to products and the R&D team: access to the Databricks product, data engineering and support teams
- Industry solutions: work with our industry teams to build solutions tailored to your sector and designed for customer use cases

Delta Sharing for data providers
Databricks integrates natively with Delta Sharing, the world's first open protocol for securely sharing data across organizations in real time, regardless of the platform on which the data resides. Delta Sharing is supported by a broad ecosystem: open source clients, commercial clients, business intelligence, analytics and governance tools, and data providers.
|
https://www.databricks.com/dataaisummit/speaker/karthik-ramasamy/#
|
Karthik Ramasamy - Data + AI Summit 2023 | Databricks
Karthik Ramasamy, Head of Streaming at Databricks
Karthik Ramasamy is the Head of Streaming at Databricks. Before joining Databricks, he was a Senior Director of Engineering at Splunk, managing the Pulsar team. Before Splunk, he was the co-founder and CEO of Streamlio, which focused on building next-generation event processing infrastructure using Apache Pulsar, and he led the acquisition of Streamlio by Splunk. Before Streamlio, he was the engineering manager and technical lead for real-time infrastructure at Twitter, where he co-created Twitter Heron, which was open sourced and used by several companies. He has two decades of experience working with companies such as Teradata, Greenplum and Juniper in their rapid growth stages, building parallel databases, big data infrastructure and networking. He co-founded Locomatix, a company specializing in real-time stream processing on Hadoop and Cassandra using SQL, which was acquired by Twitter. Karthik has a Ph.D. in computer science from the University of Wisconsin–Madison, with a focus on big data and databases. During his time there, several of the research projects he participated in were later spun off as a company acquired by Teradata. 
Karthik is the author of several publications, patents and a popular book, Network Routing: Algorithms, Protocols and Architectures.