This document discusses SQL Azure and Windows Azure Storage. SQL Azure allows storing databases in the cloud with high availability and load balancing. Windows Azure Storage provides durable cloud storage for blobs, disks, tables and queues. It replicates data across multiple datacenters for high availability and scales massively to store large amounts of unstructured and structured data.
This document discusses Azure Resource Manager templates, which provide a declarative and automated way to deploy resources in Azure. Some key points:
- ARM templates define the deployment of Azure resources through a JSON file, allowing deployments to be automated, repeatable, and easy to manage.
- Templates use parameters for user input, variables for reuse, and outputs to capture deployment results. Expressions allow dynamic values.
- Template execution establishes dependencies between resources through functions like dependsOn and reference.
- Templates can be linked to decompose deployments and allow reuse of common configurations. State can be passed between templates through parameters, variables, and outputs.
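As a sketch of what that template structure looks like, the snippet below assembles a minimal ARM-style template as a Python dictionary; the resource types, names, and API versions are illustrative assumptions, not a real deployment.

```python
import json

def make_template(location_default="westeurope"):
    """Build a minimal ARM-style template as a Python dict.

    The resource names and API versions here are illustrative
    assumptions, not taken from a real deployment.
    """
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        # Parameters capture user input at deployment time.
        "parameters": {
            "siteName": {"type": "string"}
        },
        # Variables hold values reused across the template.
        "variables": {
            "location": location_default
        },
        "resources": [
            {
                "type": "Microsoft.Web/serverfarms",
                "apiVersion": "2022-03-01",
                "name": "examplePlan",
                "location": "[variables('location')]"
            },
            {
                "type": "Microsoft.Web/sites",
                "apiVersion": "2022-03-01",
                "name": "[parameters('siteName')]",
                "location": "[variables('location')]",
                # dependsOn orders deployment: the site waits for the plan.
                "dependsOn": ["examplePlan"]
            }
        ],
        # Outputs surface deployment results to the caller.
        "outputs": {
            "siteName": {"type": "string", "value": "[parameters('siteName')]"}
        }
    }

print(sorted(make_template().keys()))
```

The `dependsOn` entry is what gives the deployment its ordering; everything else is declarative data that the Resource Manager evaluates.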
This document discusses various features of Microsoft Azure Websites including:
- Language support for developing apps with .NET, Python, Node.js, Java, and PHP.
- Deployment options including manual and auto-scaling of instances. Auto-scaling can dynamically scale the web tier based on CPU, memory, and other metrics.
- Additional features like staging environments, web jobs, traffic manager for intelligent routing, backups, and hybrid connections.
- Services that can be used with web sites like Redis Cache, Application Insights, and Debug Console.
- Customizing deployments with deployment scripts and site extensions.
- Fortune 500 companies and over a million developers use Azure Web Sites.
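The metric-driven auto-scaling described above can be sketched as a simple rule; the CPU thresholds and instance limits below are made-up illustrative values, not Azure defaults.

```python
def desired_instances(current, cpu_percent, min_instances=1, max_instances=10,
                      scale_out_at=75.0, scale_in_at=25.0):
    """Toy version of a metric-driven auto-scale rule: add an
    instance when average CPU is high, remove one when it is low,
    clamped to the configured instance limits."""
    if cpu_percent >= scale_out_at:
        current += 1
    elif cpu_percent <= scale_in_at:
        current -= 1
    return max(min_instances, min(max_instances, current))

print(desired_instances(3, 90))  # 4: high CPU triggers scale-out
```

A real auto-scale engine would average the metric over a window and add cooldown periods to avoid flapping; the clamping logic is the essential part.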
Cloud services provide scalability, availability, and reliability so that developers can focus on application code. A cloud service uses public endpoints for external access, internal endpoints for private communication between roles, and instance input endpoints to reach individual instances. Roles in a cloud service can communicate over HTTP and provide web and worker functionality. Designing for the cloud means embracing failure: availability, reliability, and scalability come from redundancy, built-in Azure features such as auto-recovery, and graceful handling of transient errors.
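The transient-error handling mentioned above is commonly implemented as retry with exponential backoff. This is a generic sketch of that pattern, not Azure-specific code; the simulated flaky dependency stands in for a cloud call.

```python
import time

def with_retries(operation, attempts=4, base_delay=0.01):
    """Retry a callable on failure with exponential backoff,
    a common pattern for handling transient cloud errors."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            # Back off exponentially before the next try.
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # prints "ok" after two retried failures
```

Production retry helpers usually add jitter to the delay and only retry errors known to be transient; the structure stays the same.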
This document provides an overview of Azure Virtual Machines including:
- Launching Windows and Linux VMs in minutes and scaling from 1 to 1000s of instances with per-minute billing.
- A gallery of prebuilt images for workloads like SQL Server, SharePoint, and SAP HANA.
- VM sizes that range from shared core/768MB RAM to 16 cores/112GB RAM.
- Features like extensions, disks, availability sets, load balancing, and cross-premises connectivity.
- Disaster recovery options like replication to secondary sites and orchestrated failover to Azure.
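Per-minute billing can be illustrated with a small proration helper; the hourly rate used here is invented for the example, not an actual Azure price.

```python
def vm_cost(minutes_running, hourly_rate):
    """Per-minute billing: charge only for the minutes used,
    prorated from an hourly rate (the rate here is made up)."""
    return round(minutes_running * hourly_rate / 60.0, 4)

print(vm_cost(90, 0.12))  # 1.5 hours at $0.12/hour -> 0.18
```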
This document discusses Microsoft Azure Mobile Services, which provides a backend platform for building and managing mobile apps. It includes features for storage, authentication, push notifications, scheduling jobs, and more. The document demonstrates how to get started with Mobile Services, customize backend logic, add authentication, and scale the services. It also provides an overview of the Azure Mobile Services architecture and pricing tiers.
This document discusses Microsoft automation tools including Service Management Automation, PowerShell workflows, Azure Automation, and PowerShell Desired State Configuration. It provides an overview of each tool's architecture and capabilities. The document demonstrates how to author PowerShell workflows using tools like the Azure Automation Authoring Toolkit. It also demonstrates PowerShell DSC and how to configure systems using a pull server model both on-premises and with Azure Automation DSC in the cloud. The key takeaway is that Microsoft provides a comprehensive set of automation tools to configure, manage, and automate hybrid cloud environments.
This document discusses why Java is a good option for developing on the Azure cloud platform. It notes that Azure provides SDKs and tooling to support Java development and that there are new developments like HDInsight and Azure Search that support Java. The document also shares statistics about Azure's growth and momentum in the cloud market.
Azure Virtual Machines Deployment Scenarios (Brian Benz)
Architecture and Scenarios for deploying Database and middleware applications on Azure Virtual Machines including SQL Server, Oracle, Hadoop, and others.
Cnam Azure 2014: Web Sites and Continuous Integration (Aymeric Weinbach)
This document discusses Windows Azure Web Sites, which provide a platform for hosting web applications on Microsoft's cloud computing platform. It describes the architecture of Azure Web Sites including deployment via FTP or source control. It also demonstrates configuring automated deployments from GitHub to different environments like development, staging, and production using scripts. This allows for continuous deployment across environments from a source code repository.
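The promotion of code from GitHub to different environments can be sketched as a branch-to-environment mapping; the branch names below are assumptions for illustration, not the deck's actual configuration.

```python
# Hypothetical branch-to-environment mapping for a GitHub-driven
# deployment pipeline of the kind described above.
BRANCH_ENVIRONMENTS = {
    "develop": "development",
    "release": "staging",
    "master": "production",
}

def target_environment(branch):
    """Return the environment a pushed branch deploys to;
    unknown branches deploy nowhere."""
    return BRANCH_ENVIRONMENTS.get(branch)

print(target_environment("release"))  # staging
```

A deployment script then only needs the pushed branch name to decide which site or slot to update.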
An overview of current Microsoft technologies around private and hybrid clouds, and what's coming with the next version, aka Azure Stack, from our session at E2EVC Berlin.
Experiences using CouchDB inside Microsoft's Azure team (Brian Benz)
Co-presented with Will Perry (@willpe). Real-world experiences using CouchDB inside Microsoft, and also how to get started with CouchDB on Microsoft Azure.
Introduction to Windows Azure Pack and Service Management Automation (Michael Rüefli)
This document discusses Windows Azure Pack and Service Management Automation. It provides an overview of private cloud solutions and capabilities brought by Windows Azure Pack such as Infrastructure as a Service, Platform as a Service, and Software as a Service. It also summarizes key features of Service Management Automation for automating and extending private cloud services. The document concludes with a demonstration of Windows Azure Pack capabilities for infrastructure as a service.
Tech Ed North America 2014 - Java on Azure (Brian Benz)
Microsoft Azure provides support for running Java workloads through Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and services. IaaS offers virtual machines with official Oracle JDK versions and pre-configured images. PaaS uses a 64-bit OpenJDK build by Azul and provides deployment and management tools. Services include the Azure SDK for Java for storage, queues, databases, and more. Microsoft has partnerships with Oracle and Azul to support Java on Azure.
MongoDB World 2014 NYC: MongoDB on Azure - Tips, Tricks and Examples (Brian Benz)
This document discusses MongoDB options on Azure, including running MongoDB on virtual machines or using hosted MongoDB services. It provides an example of how to provision a MongoDB replica set across multiple Azure virtual machines and configure endpoints. Automation support is available for deploying and configuring MongoDB on Azure virtual machines. Links are also included for tutorials and other resources on MongoDB and Azure.
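Provisioning a replica set across several VMs ultimately comes down to an initiation document of the shape passed to `rs.initiate()`; the sketch below builds one in Python, and the host names are placeholders, not real endpoints.

```python
def replica_set_config(set_name, hosts, port=27017):
    """Build a MongoDB replica-set initiation document of the
    shape passed to rs.initiate(); hosts are placeholders."""
    return {
        "_id": set_name,
        "members": [
            {"_id": i, "host": f"{h}:{port}"}
            for i, h in enumerate(hosts)
        ],
    }

cfg = replica_set_config(
    "rs0",
    ["vm-a.cloudapp.net", "vm-b.cloudapp.net", "vm-c.cloudapp.net"],
)
print(len(cfg["members"]))  # 3
```

On Azure, each listed host would be a VM with its MongoDB port exposed as an endpoint so the members can reach one another.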
This document provides an overview of Azure Virtual Machines, including how to provision VMs, available VM sizes and pricing, data persistence options, high availability features, networking capabilities, and load balancing options. Key points include being able to launch Windows and Linux VMs in minutes and scale from 1 to 1000s of instances with per-minute billing. VM extensions enable customization, and VMs can be made highly available through features like availability sets and fault domains. Virtual networks allow creating protected private networks in Azure that can connect to on-premises environments.
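The fault-domain idea behind availability sets can be illustrated with a round-robin placement sketch; the default of two domains here is an assumption for the example.

```python
def assign_fault_domains(instance_count, fault_domains=2):
    """Round-robin instances across fault domains, the way an
    availability set keeps replicas off shared hardware so a
    single rack failure cannot take them all down."""
    return [i % fault_domains for i in range(instance_count)]

print(assign_fault_domains(5))  # [0, 1, 0, 1, 0]
```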
Overview of Windows Azure Virtual Machines - the IaaS offering in the Windows Azure platform. The presentation covers the compute, storage and network features of Virtual Machines. It also describes how best to deploy Windows Azure cloud services and VMs.
Service Management Automation (SMA) from Zero to Hero (Michael Rüefli)
An introduction to the architecture, deployment, and best practices for deploying SMA to automate clouds and datacenters. Installation is covered, along with the basics of PowerShell workflows.
AWS vs. Azure vs. Google vs. SoftLayer: Network, Storage and DBaaS (RightScale)
Most enterprises have a multi-cloud strategy, but choosing the right cloud for a workload can be challenging. In a previous deck we covered differences in block/object storage, pricing, and container services. In this deck we’ll drill down on archival storage, database-as-a-service (DBaaS), and networking options for the leading public clouds.
At Tech Summit 1 with MSHOWTO, I delivered the Citrix on Azure session together with Özgür Çebi. You can review the presentation for that session at this address.
E2EVC 2014: Building Clouds with Microsoft Cloud OS and System Center (Michael Rüefli)
The document provides an overview of Microsoft's cloud operating system stack and its components. It discusses the architecture of Azure Pack and System Center, which includes the virtual machine manager, networking, storage, hypervisor and automation. It describes how these components work together to provide a software-defined infrastructure that can run workloads for multiple tenants. The document also highlights demonstrations of storage management, software-defined networking and service management automation.
Introduction to Cloud Computing: Windows Azure 101 (Mithun T. Dhar)
The Windows Azure platform is a set of high-performance cloud computing services that can be used together or independently and enable developers to leverage existing skills and familiar tools to develop cloud applications. In this session, we’ll provide a developer-focused overview of this new online service computing platform. We’ll explore the components, key features and real day-to-day benefits of Windows Azure.
Highlights include:
· What is cloud computing?
· Running web and web service applications in the cloud
· Using the Windows Azure and local developer cloud fabric
· Getting started – tools, SDKs and accounts
· Writing applications for Windows Azure
ContainerDays NYC 2016: "Containers in Azure: Understanding the Microsoft Con..." (DynamicInfraDays)
This document discusses Microsoft's container ecosystem. It covers Docker for Windows, Windows containers, Azure Container Service (ACS), and running .NET on Linux. Docker for Windows allows running Docker natively on Windows using Hyper-V. Windows containers leverage the Windows kernel for isolation using namespaces and control groups. ACS simplifies deploying Docker clusters to Azure. .NET Core allows developing .NET applications on Linux. The document also briefly mentions Docker Datacenter and Enterprise DC/OS as enterprise container solutions.
CloudStack is an open source cloud computing platform that provides infrastructure as a service. It was originally formed in 2008 as VMOps and was later acquired by Citrix in 2011. CloudStack allows for on-demand provisioning of computing resources in a multi-tenant environment with high availability and supports various hypervisors including KVM, XenServer, and VMware. It provides APIs to manage and automate the provisioning of virtual servers, load balancing, firewalls, storage, and networking.
The document provides an overview of Azure AppFabric Caching. The caching service is a distributed, in-memory application cache that can accelerate Azure applications, and it can be used for frequently accessed data, ASP.NET session state, and output caching. The document reviews configuration options, usage, session state provider considerations, tracing, quotas and errors, local caching options, and limitations.
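The local caching option mentioned above amounts to keeping a small in-memory copy with a time-to-live so repeated reads skip the distributed cache. This is a generic sketch of that idea, not the actual caching client API.

```python
import time

class LocalCache:
    """Minimal in-memory cache with a per-entry time-to-live,
    sketching the local-cache idea: serve repeated reads locally
    until the entry expires."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

cache = LocalCache(ttl_seconds=0.05)
cache.put("session:42", {"user": "alice"})
print(cache.get("session:42"))  # hit while fresh
time.sleep(0.06)
print(cache.get("session:42"))  # None after expiry
```

The trade-off is staleness: a short TTL bounds how long a locally cached value can lag behind the distributed cache.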
The document provides an overview of the Windows Azure Platform. It describes the client, integration, and application layers that make up the platform. It also outlines the data services available, including storage, databases, computing resources, and networking capabilities. Finally, it discusses high availability and deployment options for ensuring reliability and uptime of applications and services built on the Azure platform.
Facebook was founded in 2004 in the United States and has over 800 million users, making it the most popular social network globally. It was started by Mark Zuckerberg and introduced the timeline feature in 2011. Google launched Google+ in June 2011 as a social network combining features of Google's previous social sites. It uses circles to organize contacts and allow sharing across Google services, hangouts for group video chat, and messenger for instant messaging between circles.
The weekly report summarizes the progress of a student group working on a product design. The student was assigned to design the power supply and developed a circuit using IC regulators to produce 5V, 9V, 12V and 4.5V outputs from the input voltage. Their power supply design uses IC7805, IC7809 and a voltage regulator to achieve the desired voltages. Next steps are to design the voltage regulation circuit and further study the LM317 datasheet for the 4.5V output.
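The 4.5V design step hinges on the LM317's datasheet relationship Vout = Vref * (1 + R2/R1); the resistor values below are one illustrative pair, not necessarily the group's actual choice.

```python
def lm317_vout(r1_ohms, r2_ohms, vref=1.25):
    """LM317 output voltage from its datasheet formula
    Vout = Vref * (1 + R2/R1), ignoring the small Iadj term."""
    return vref * (1 + r2_ohms / r1_ohms)

# With R1 = 240 ohms (a common choice), R2 = 624 ohms gives 4.5 V.
print(round(lm317_vout(240, 624), 2))  # 4.5
```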
This document covers the rules in the cafeteria of the Elvira Zipitria school. Students must follow the same rules during lunch as during class hours. The Dining Committee and the School Council have approved regulations detailing acceptable types of conduct and corrective measures. The person in charge of the cafeteria and the monitors will make sure that students follow the cafeteria rules and respect the rights and duties of others.
This document presents the Directorate of Social Communication and Public Relations of the Mérida City Council, including its mission to promote and publicize the activities of the municipal administration in a clear and objective way. It details the four sub-directorates that make it up and their respective directors, with contact information such as name, position, and telephone extension. Finally, it includes an organizational chart showing the directorate's hierarchy.
This document announces a raffle to support the JETs Freeride Team at the USASA Nationals at Copper Mountain in Colorado. The raffle features prizes including a GoPro camera, skiing and snowboarding equipment, and helmets. Tickets can be purchased for $5 each or 3 for $10 at local stores, online, or from Freeride Team members on specified weekends. The drawing will be held on April 3rd.
The 2009 Venice Carnival was held from January 31 to February 17. This annual event draws thousands of visitors to the city of Venice to enjoy elaborate masks, parades, dances, and food. The Venice Carnival is a historic and cultural celebration dating back centuries.
The media product portrays two 18-19-year-old characters through casual clothing and dialogue that uses slang and joking interactions. In the thriller production, their dress and casual behaviour are meant to represent stereotypical teenagers.
Adventures with the One-Point Distribution Function (Peter Coles)
Talk given at "From Inflation to Galaxies", workshop in honour of the 60th Birthday of Sabino Matarrese, Castiglioncello, Italy, 31st August to 3rd September 2015
This document presents an introduction to teaching techniques, describing them as procedures that support learning by following ordered steps. It explains that they facilitate learning and student participation, developing the student's autonomy. It also classifies the techniques into lecture, project method, case method, question method, simulation and games, problem-based learning, role play, and panel discussion.
The administration's networks and the user's point of view (Imma Aguilar Nàcher)
This document describes how social networks and cyberactivism are transforming citizen participation and forms of protest. It notes that cyberactivism is a legitimate form of activism that can drive action both online and offline. The document also highlights some key characteristics of this new type of network-mediated citizen mobilization, such as the emphasis on collaboration, the management of collective emotions, and the use of novel languages and tactics.
Communicating the new politics: technopolitics and emotions (Imma Aguilar Nàcher)
The document deals with political communication in the digital era and the concept of "technopolitics". It explains that technopolitics involves the strategic use of digital tools for organization, communication, and collective action in order to mobilize citizens and achieve change. It focuses on three key aspects: 1) managing political reputation, 2) influencing the media, and 3) provoking voter action and reaction through viral content that generates emotion.
The document discusses Amazon Aurora, a database service from AWS that is compatible with PostgreSQL and MySQL. It provides summaries of Aurora's architecture, performance advantages, and customer benefits compared to traditional databases. Specifically, the document notes that Aurora achieves higher performance and availability than PostgreSQL by using a distributed, scalable storage system and replicating data across Availability Zones. It shares performance test results showing that Aurora can be up to 3x faster than PostgreSQL for various workloads. Customers have also cited lower costs and easier management with Aurora compared to commercial databases.
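Aurora's availability story rests on quorum writes across its replicated storage copies; Aurora is generally described as keeping six copies across three Availability Zones, with four of six needed to acknowledge a write. A toy quorum check makes the idea concrete:

```python
def quorum_ok(acks, total_copies=6, write_quorum=4):
    """Quorum check of the kind Aurora's storage layer is described
    as using: a write commits once enough copies acknowledge it
    (four of six copies spread across three AZs)."""
    return acks >= write_quorum and acks <= total_copies

print(quorum_ok(4))  # True: quorum reached
print(quorum_ok(3))  # False: not enough acknowledgements
```

Because the quorum tolerates two unavailable copies, losing an entire AZ (two copies) still leaves enough members to commit writes.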
This document provides guidance and best practices for using Infrastructure as a Service (IaaS) on Microsoft Azure for database workloads. It discusses key differences between IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). The document also covers Azure-specific concepts like virtual machine series, availability zones, storage accounts, and redundancy options to help architects design cloud infrastructures that meet business requirements. Specialized configurations like constrained VMs and ultra disks are also presented along with strategies for ensuring high performance and availability of database workloads on Azure IaaS.
Design, Deploy, and Optimize SQL Server on AWS - AWS Online Tech Talks (Amazon Web Services)
Enterprises are quickly moving database workloads like SQL Server to the cloud, but with so many options, the best approach isn’t always obvious. You exercise full control of your SQL Server workloads by running them on Amazon EC2 instances, or leverage Amazon RDS for a fully managed database experience. This session will go deep on best practices and considerations for running SQL Server on AWS. We will cover best practices for deploying SQL Server, how to choose between Amazon EC2 and Amazon RDS, ways to optimize the performance of your SQL Server deployment for different applications types. We review in detail how to provision and monitor your SQL Server databases, and how to manage scalability, performance, availability, security, and backup and recovery, in both Amazon RDS and Amazon EC2.
Design, Deploy, and Optimize SQL Server on AWS - June 2017 AWS Online Tech TalksAmazon Web Services
Learning Objectives:
- Learn how to build applications on AWS from a strong foundation on SQL Server
- Learn when to deploy SQL Server on Amazon EC2 versus Amazon RDS
- Learn how to take advantage of the latest features in SQL Server 2016 when running on AWS
Enterprises are quickly moving database workloads like SQL Server to the cloud, but with so many options, the best approach isn’t always obvious. You exercise full control of your SQL Server workloads by running them on Amazon EC2 instances, or leverage Amazon RDS for a fully managed database experience. This session will go deep on best practices and considerations for running SQL Server on AWS. We will cover best practices for deploying SQL Server, how to choose between Amazon EC2 and Amazon RDS, ways to optimize the performance of your SQL Server deployment for different applications types. We review in detail how to provision and monitor your SQL Server databases, and how to manage scalability, performance, availability, security, and backup and recovery, in both Amazon RDS and Amazon EC2.
This document discusses Microsoft SQL Server options in Azure. It begins by explaining the differences between Azure SQL and on-premises SQL Server, noting that Azure SQL is based on the latest SQL Server Enterprise version in a PaaS model and is not fully compatible with on-premises SQL Server. It then outlines the various options for SQL in Azure, including SQL Server on VMs, containers, and Azure SQL with DTU or vCore pricing/scaling models. The document provides details on features, pricing tiers, scaling, security, and other considerations for using SQL in Azure. It concludes that while migration may require adjustments, Azure SQL provides many advantages over on-premises SQL Server.
This document discusses best practices for migrating database workloads to Azure Infrastructure as a Service (IaaS). Some key points include:
- Choosing the appropriate VM series like E or M series optimized for database workloads.
- Using availability zones and geo-redundant storage for high availability and disaster recovery.
- Sizing storage correctly based on the database's input/output needs and using premium SSDs where needed.
- Migrating existing monitoring and management tools to the cloud to provide familiarity and automating tasks like backups, patching, and problem resolution.
Amazon Aurora adds PostgreSQL compatibility to its cloud-optimized relational database. With PostgreSQL compatibility, customers can now choose to use Amazon's database with the performance and availability of commercial databases and the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides high performance, durability, availability and automatic scaling capabilities for PostgreSQL workloads.
Cloud computing UNIT 2.1 presentation inRahulBhole12
Cloud storage allows users to store files online through cloud storage providers like Apple iCloud, Dropbox, Google Drive, Amazon Cloud Drive, and Microsoft SkyDrive. These providers offer various amounts of free storage and options to purchase additional storage. They allow files to be securely uploaded, accessed, and synced across devices. The best cloud storage provider depends on individual needs and preferences regarding storage space requirements and features offered.
Oracle Real Application Cluster (RAC) allows multiple instances of an Oracle database to run simultaneously on multiple nodes. It provides high availability, scalability, and transparent application failover. Key components include shared storage, Oracle Clusterware, cache fusion for data synchronization, and Transparent Application Failover for uninterrupted connections.
The event, held on 27th April 2019, was part of the Global Azure Bootcamp and covered Microsoft's Cosmos DB, more specifically:
- Introduction to Cosmos DB, its features, internals, resource models, and request units.
- DEMO: Create an SQL API. Download sample .NET app. Simple queries.
- Covered Change Feed and showcased various use case scenarios.
- Detailed Global Distribution and Consistency Models implications.
- DEMO: Mongo - Lift and shift. Run simple .NET code against a MongoDB (in docker container) and cosmos.
- Introduction to Tinkerpop graphs
- DEMO: Graphs API. Download sample .NET app. Simple queries.
https://techspark.mt/global-azure-bootcamp-27th-april-2019/
Deep Dive on the Amazon Aurora MySQL-compatible Edition - DAT301 - re:Invent ...Amazon Web Services
The Amazon Aurora MySQL-compatible Edition is a fully managed relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It is purpose-built for the cloud using a new architectural model and distributed systems techniques. It provides far higher performance, availability, and durability than previously possible using conventional monolithic database architectures. Amazon Aurora packs a lot of innovations in the engine and storage layers. In this session, we do a deep-dive into some key innovations behind Amazon Aurora MySQL-compatible edition. We explore new improvements to the service and discuss best practices and optimal configurations.
MS Cloud Day - Building web applications with Azure storageSpiffy
This document provides an overview and agenda for a Microsoft Cloud Day session on building web applications with Azure Storage. The session will cover Blob, Table, and Queue storage capabilities in Azure, including how to create storage accounts, upload and retrieve blobs, create and query tables, and use queues for communication between services. Attendees will learn best practices for scalability when using Azure Storage.
Building Real World Application with Azuredivyapisces
This document discusses building real world applications with Microsoft Azure. It covers cloud development patterns like automating everything, using source control, continuous integration and delivery, and web development best practices. It demonstrates the Azure management portal and shows how to use source control in Visual Studio. The document also discusses data storage options in Azure like SQL Database, blob storage, and data partitioning strategies. It provides an overview of key concepts like PaaS versus IaaS, choosing relational databases, and understanding the three Vs of data storage. Finally, it demonstrates walking through an Azure app and discusses SLAs and scaling applications in the cloud.
This document provides an overview of migrating applications and workloads to AWS. It discusses key considerations for different migration approaches including "forklift", "embrace", and "optimize". It also covers important AWS services and best practices for architecture design, high availability, disaster recovery, security, storage, databases, auto-scaling, and cost optimization. Real-world customer examples of migration lessons and benefits are also presented.
온디맨드 다시보기: https://www.youtube.com/watch?v=dBLv4V3hRRQ
엔터프라이즈 미션 크리티컬 시스템을 위해서 다양한 고객 환경에서 Oracle RAC와 같은 대용량 데이터베이스가 운영중에 있습니다. 클라우드를 도입하는 많은 고객이 이러한 대용량 데이터베이스 환경을 클라우드 네이티브 서비스가 이를 대체할 수 있을지에 많은 의구심을 가지고 있습니다. AWS의 클라우드 네이티브 데이터베이스 서비스의 기술적인 관점에서 대용량 데이터 운영 관리의 특성을 살펴 보고 Oracle RAC의 완벽한 대체제로 충분한 역량을 가지고 있음을 소개합니다.
AWS January 2016 Webinar Series - Amazon Aurora for Enterprise Database Appli...Amazon Web Services
Amazon Aurora is a relational database service built from the ground up for the cloud. It is fully managed by AWS and provides enterprise-class availability, security, and performance while being simple and cost-effective. Aurora is designed to automatically scale throughput and storage, provide continuous backups, automated patching and replication across availability zones. It offers up to 15 low-latency read replicas and supports databases up to 64TB in size. Customers like Expedia and Alfresco are using Aurora to power their mission critical workloads at scale in a cost-effective manner compared to commercial databases.
Ultimate SharePoint Infrastructure Best Practises Session - Isle of Man Share...Michael Noel
This document summarizes best practices for SharePoint infrastructure design presented by Michael Noel. It discusses small, medium, and large farm models with separate web, app, and database servers. Hybrid cloud scenarios including one-way and two-way topologies are presented. Ensuring high availability through techniques like SQL AlwaysOn, database mirroring, and network load balancing is also covered. The presentation concludes with discussions of security best practices, documentation, and virtualization performance monitoring.
Revolutionary Storage for Modern Databases, Applications and Infrastrcturesabnees
Sanjay Sabnis presented on next generation storage solutions for modern big data applications. He discussed how NVMe storage provides significantly higher performance than SATA, with speeds over 6x faster for reads and over 40x faster for writes. Pavilion Data offers an all-NVMe rack scale storage array that provides 120GB/s of throughput with DAS-level latency. This solution can meet the performance and scalability demands of big data workloads like MongoDB, Splunk, and containerized applications.
Ma conférence Serverless everywhere avec Azure Functions et Dapr pour Devoxx France 2021
Pour des scénarios IoT, hybride et multicloud, faisons un tour d’horizon sur les dernières nouveautés Serverless de Microsoft . Et explorons ensemble les nouvelles opportunités offertes pour les microservices avec DAPR . Et voyons comment pousser le système en tirant parti des deux combinés.
Global Azure is the biggest Microsoft Azure community event with over 10,000 people from 192 locations across 57 countries. The agenda includes an introduction to IoT, prototyping connected objects, Azure building blocks, a demo, and some code. When building IoT solutions, choices must be made around how devices are powered and connected to cloud services, and what protocols are used to encode and transmit data. Event Hubs and Stream Analytics can be used to process IoT data at scale from various sources in the cloud. The NAO robot is proposed as an interface for an ambient intelligence weather station prototype that collects data from sensors via AMQP and displays information through HTTP requests.
This document summarizes Microsoft Azure cloud computing services. It discusses Azure's global datacenter infrastructure and growth, the services it provides including computing, storage, networking and platforms, and its certifications and compliance with standards for security, privacy and government use. Examples of customers using Azure include Microsoft itself for Skype, Halo and Office, as well as other companies. The presentation encourages attendees to test Azure services for free through various trial offers.
This document discusses cross-platform support and push notifications in Windows Azure Mobile Services. It explains how to send push notifications to different device platforms including Windows Store, Windows Phone, iOS, and Android. It also discusses using service filters and delegating handlers to intercept requests and responses for custom processing like adding versioning information.
This document discusses virtual machines and storage options on Windows Azure. It notes that Azure virtual machines can have persistent drives stored in Windows Azure Storage, which provides reliability and geo-replication of data across multiple data centers. The document also lists the different sizes of virtual machines available on Azure, varying in their number of CPU cores, amount of memory, bandwidth, and number of allowed data disks.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
inQuba Webinar Mastering Customer Journey Management with Dr Graham HillLizaNolte
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
What is an RPA CoE? Session 2 – CoE RolesDianaGray10
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Must Know Postgres Extension for DBA and Developer during MigrationMydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge - - Capture & Transfer
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
2. SQL Azure: one or more databases
(Diagram: several applications connecting to one or more databases hosted in SQL Azure Database.)
3. Implementation
(Diagram: applications reach SQL Azure over the internet; a load balancer spreads TDS (TCP) connections across a row of gateways fronting the backend SQL nodes.)
Applications use the standard SQL access libraries: ODBC, ADO.NET, PHP, …
Load balancers spread the load across the TDS gateways while honoring session affinity
Gateway: TDS protocol gateway; enforces AUTHN/AUTHZ policy; proxy to the backend SQL nodes
Scalability and availability: fabric, failover, replication, and load balancing
4. SQL Azure
SQL Server in the cloud, with its advantages:
Simple provisioning, via the portal or via the REST API
High availability
Load balancing
The TDS protocol (the same one SQL Server uses), with everything carried over SSL (encrypted)
5. Differences from SQL Server
No access to anything physical (filegroups, …)
No CLR
No distributed transactions
No Service Broker
6. Developing with SQL Azure
Implement a retry policy
Bandwidth is billed, so use whenever possible:
Lazy loading
Caching
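The retry advice above can be sketched without any library: a generic wrapper that retries transient errors with exponential backoff and jitter. The error codes and the `flaky_query` operation below are illustrative stand-ins, not a real driver API.

```python
import random
import time

# Illustrative subset of error codes SQL Azure may return for transient faults.
TRANSIENT_ERRORS = {40501, 40197, 10053}

class TransientError(Exception):
    def __init__(self, code):
        super().__init__(f"transient error {code}")
        self.code = code

def with_retry(operation, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid synchronized retries.
            sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Demo: an operation that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_query():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError(40501)  # "service busy" style error
    return "rows"

result = with_retry(flaky_query, sleep=lambda s: None)
print(result, attempts["n"])  # → rows 3
```

The same wrapper composes with lazy loading and caching: retry only the actual round trip, and serve cached results without touching the wire at all.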
8. Run SQL on VM
•Run any SQL product on a cloud VM
•Support for SQL Server, Oracle, MySQL
•Ready-to-go VM images available in the Gallery
•Persistent storage using attached disks in blob storage
10. Windows Azure Storage
• Cloud storage: anywhere and anytime access
• Blobs, Disks, Tables and Queues
• Highly durable, available and massively scalable
• Easily build “internet scale” applications
• 8.5 trillion stored objects
• 900K requests/sec on average (2.3+ trillion per month)
• Pay for what you use
• Exposed via easy and open REST APIs
• Client libraries in .NET, Java, Node.js, Python, PHP, Ruby
14. Blob Storage
To store your files, small or very large:
Block blobs for image, video, and similar files (200 GB max)
Page blobs, optimized for fast random reads and writes (1 TB max)
Azure Drives: an NTFS disk you can “mount” in your role, automatically backed by a page blob
15. CDN with smooth streaming for videos
Blobs live in containers
Public or private access
Snapshot
Shared access signature
Lease
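A shared access signature is essentially an HMAC-SHA256 over a "string to sign" computed with the account key. The sketch below follows the layout of the 2012-era service version (permissions, start, expiry, canonicalized resource, empty signed identifier); the exact field list varies by service version, and the account name and key here are made up.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_blob_sas(account, key_b64, container, blob, permissions, start, expiry):
    """Build a SAS query string for one blob (illustrative field layout)."""
    # 2012-02-12-style string-to-sign: permissions, start, expiry,
    # canonicalized resource, and an (empty) signed identifier.
    # Later service versions add more fields.
    resource = f"/{account}/{container}/{blob}"
    string_to_sign = "\n".join([permissions, start, expiry, resource, ""])
    key = base64.b64decode(key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    sig = base64.b64encode(digest).decode()
    return urlencode({"sp": permissions, "st": start, "se": expiry, "sig": sig})

# Hypothetical account name and (base64) key.
qs = make_blob_sas(
    account="myaccount",
    key_b64=base64.b64encode(b"not-a-real-key").decode(),
    container="photos",
    blob="kitten.jpg",
    permissions="r",
    start="2013-01-01T00:00:00Z",
    expiry="2013-01-02T00:00:00Z",
)
print("http://myaccount.blob.core.windows.net/photos/kitten.jpg?" + qs)
```

Anyone holding this URL can read the blob until the expiry time, without ever seeing the account key; this is what makes SAS useful for handing temporary access to clients.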
22. Table Storage
A single index: the PartitionKey/RowKey pair
Transactions are possible within a single partition
OData + authentication
Open-source .NET SDK
https://github.com/WindowsAzure/azure-sdk-for-net
REST API
Non-relational tables
Flexible schema (several schema versions can coexist in the same table)
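Since the PartitionKey/RowKey pair is the only index, clients address entities and scope queries with it directly in the OData URL. A small sketch of how those URLs are built; the account and table names are hypothetical:

```python
from urllib.parse import quote

def entity_url(account, table, partition_key, row_key):
    """URL for a point lookup on the single index (PartitionKey, RowKey)."""
    return (f"http://{account}.table.core.windows.net/"
            f"{table}(PartitionKey='{quote(partition_key)}',RowKey='{quote(row_key)}')")

def partition_query(account, table, partition_key):
    """$filter query that stays inside one partition, where entity-group
    transactions and fast scans are possible."""
    flt = quote(f"PartitionKey eq '{partition_key}'")
    return f"http://{account}.table.core.windows.net/{table}()?$filter={flt}"

print(entity_url("myaccount", "Orders", "customer42", "order001"))
# → http://myaccount.table.core.windows.net/Orders(PartitionKey='customer42',RowKey='order001')
```

The design consequence: pick a PartitionKey so that entities updated together land in the same partition, because batch transactions do not span partitions.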
23. Queue typical usage
A Web Role (ASP.NET, WCF, etc.) hands work to a Worker Role (main() { … }) through a queue:
1) Receive work
2) Put message in queue
3) Get message from queue
4) Do work
5) Delete message from queue
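The get/delete split in steps 3 to 5 is what makes the pattern robust: a Get only hides the message for a visibility timeout, and the message reappears if the worker dies before Delete. Below is a minimal in-memory model of that contract, not the real queue API:

```python
import time
from collections import deque

class Queue:
    """In-memory sketch of the Azure queue get/delete contract: Get hides a
    message for a visibility timeout; without a Delete, it reappears
    (at-least-once delivery)."""
    def __init__(self, visibility_timeout=30.0):
        self.visible = deque()
        self.invisible = {}      # pop receipt -> (reappear_time, message)
        self.timeout = visibility_timeout
        self._receipt = 0

    def put(self, message):
        self.visible.append(message)

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        # Requeue messages whose visibility timeout expired.
        for receipt, (t, msg) in list(self.invisible.items()):
            if t <= now:
                del self.invisible[receipt]
                self.visible.append(msg)
        if not self.visible:
            return None
        self._receipt += 1
        msg = self.visible.popleft()
        self.invisible[self._receipt] = (now + self.timeout, msg)
        return self._receipt, msg

    def delete(self, receipt):
        self.invisible.pop(receipt, None)

# Web role puts work; worker role gets it, does the work, then deletes it.
q = Queue(visibility_timeout=30.0)
q.put("resize kitten.jpg")
receipt, msg = q.get(now=0.0)
# ... do the work ...
q.delete(receipt)        # step 5: only delete once the work is done
print(q.get(now=100.0))  # → None (message fully consumed)
```

Because delivery is at-least-once, the worker's processing should be idempotent: doing the same piece of work twice must be harmless.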
27. NoSQL on Azure
• The Azure Tables service is a NoSQL row store
• MongoDB is a document (JSON) store
• Cassandra is a columnar store with excellent replication
• HBase is a Big Data (Hadoop) NoSQL store available in HDInsight
29. Design Goals
Highly available with strong consistency
• Provide access to data in the face of failures/partitioning
Durability
• Replicate data several times within and across regions
Scalability
• Need to scale to zettabytes
• Provide a global namespace to access data around the world
• Automatically scale out and load balance data to meet peak traffic demands
Additional details can be found in the SOSP paper: “Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency”, ACM Symposium on Operating Systems Principles (SOSP), Oct. 2011
30. Windows Azure Storage Stamps
Access blob storage via the URL: http://<account>.blob.core.windows.net/
(Diagram: a Location Service directs data access to storage stamps. Each stamp consists of a load balancer, front-ends, a partition layer, and a DFS layer with intra-stamp replication; inter-stamp (geo) replication copies data between stamps.)
32. Availability with Consistency for Writing (partition layer)
• All writes are appends to the end of a log, which is an append to the last extent in the log
• Write consistency across all replicas of an extent:
• Appends are ordered the same across all 3 replicas of an extent (file)
• Success is returned only once all 3 replica appends are committed to storage
• When an extent reaches a certain size, or on a write failure or load-balancing event, its replica set is sealed and no more data is ever appended to it
• Write availability, to handle failures during a write:
• Seal the extent’s replica set
• Append immediately to a new extent (replica set) on 3 other available nodes
• Add this new extent to the end of the partition’s log (stream)
33. Availability with Consistency for Reading (partition layer)
• Read consistency: reads can go to any replica, since the data in each replica of an extent is bit-wise identical
• Read availability: parallel read requests are sent out if the first read is taking longer than the 95th-percentile latency
34. Dynamic Load Balancing – Partition Layer
• Spreads index/transaction processing across partition servers
• A master monitors traffic load and resource utilization on the partition servers
• Partitions are dynamically load-balanced across servers to achieve better performance and availability
• No data is moved around; the master only reassigns which part of the index a partition server is responsible for
35. Dynamic Load Balancing – DFS Layer
• DFS read load balancing across replicas: monitor the latency and load on each node/replica, dynamically select which replica to read from, and start additional reads in parallel based on the 95th-percentile latency
36. Architecture Summary
• Durability: all data is stored with at least 3 replicas
• Consistency: all committed data is identical across all 3 replicas
• Availability: reads can go to any of the 3 replicas; if writing hits any issue, the extent is sealed and appending continues on a new extent
• Performance/scale: retries are issued based on 95th-percentile latencies; data is automatically scaled out and load-balanced based on load and capacity
• Additional details can be found in the SOSP paper: “Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency”, ACM Symposium on Operating Systems Principles (SOSP), Oct. 2011
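The write path in the summary (commit only when all 3 replicas ack; seal on failure; continue appending on a fresh extent) can be modeled in a few lines. This is a toy simulation of the idea, not the actual DFS layer; in the real system the uncommitted tail of a sealed extent is also reconciled, which this sketch skips.

```python
class Replica:
    def __init__(self):
        self.healthy = True
        self.records = []

    def append(self, record):
        if not self.healthy:
            return False
        self.records.append(record)
        return True

class Extent:
    """An extent is replicated 3 ways; appends are ordered identically on
    every replica, so reads may use any of them."""
    def __init__(self):
        self.replicas = [Replica() for _ in range(3)]
        self.sealed = False

    def append(self, record):
        acked = [r.append(record) for r in self.replicas]
        return all(acked)  # committed only when all 3 replicas acked

class PartitionLog:
    def __init__(self):
        self.extents = [Extent()]

    def append(self, record):
        tail = self.extents[-1]
        if tail.sealed or not tail.append(record):
            # Seal the failed extent; retry on a fresh replica set
            # placed on 3 other available nodes.
            tail.sealed = True
            self.extents.append(Extent())
            assert self.extents[-1].append(record)

log = PartitionLog()
log.append("put kitten.jpg")
log.extents[-1].replicas[1].healthy = False   # a storage node dies
log.append("put puppy.jpg")                   # seals extent 1, opens extent 2
print(len(log.extents), log.extents[0].sealed)  # → 2 True
```

The payoff of this design is that sealed extents are immutable, which is what makes "read from any replica" safe: every replica of a sealed extent is bit-wise identical.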
38. What’s Coming by End of 2013
• Geo-replication
• Queue geo-replication
• Secondary read-only access
• Windows Azure Import/Export
• Real-time metrics for Blobs, Tables and Queues
• CORS for Azure Blobs, Tables and Queues
• JSON for Azure Tables
• New .NET 2.1 library
39. Two Types of Durability Offered
Locally Redundant Storage accounts
• Maintain 3 copies of the data within a given region
• ~2/3 the price of Geo Redundant Storage
Geo Redundant Storage accounts
• Maintain 6 copies of the data spread over 2 regions at least 400 miles apart (3 copies are kept in each region)
40. Geo Redundant Storage
Data is geo-replicated across regions 400+ miles apart
• Provides data durability in the face of a potential major regional disaster
• Provided for Blobs, Tables and Queues (NEW)
The user chooses the primary region during account creation
• Each primary region has a predefined secondary region
Asynchronous geo-replication
• Off the critical path of live requests
(Diagram of geo-replication pairs: West Europe ↔ North Europe, South Central US ↔ North Central US, East Asia ↔ South East Asia, West US ↔ East US.)
41. Geo-Rep & Geo-Failover
(Diagram: a DNS lookup of account.blob.core.windows.net resolves to the primary region, West US; after a failover, Azure DNS is updated so the same hostname points to East US.)
• The existing URL keeps working after failover
• Failover trigger: Microsoft would only fail over if the primary could not be recovered
• Geo-replication is asynchronous, so recent updates may be lost during a failover
• Data is typically geo-replicated within minutes, though there is no SLA for how long it will take
42. Geo Redundant Storage Roadmap
• Customer Controlled Failover (Future)
• Provide APIs to allow clients to switch the primary and secondary regions for a storage account
• Queue Geo-Replication (Done)
• Secondary Read Only Access (by end of CY13)
43. Secondary Read-Only Access – Scenarios
Read-only access to data even if primary is unavailable
• Access to an eventually consistent copy of the data in the other region
Provides another read source for geographically distributed applications/customers
• Allows lower latency access to data in secondary region
• Run compute in both the primary and secondary regions, each using the storage in its own region
• For these, the application semantics need to allow for eventually consistent reads
44. Secondary RO Access – How it Works
Customers using Geo Redundant Storage can opt in to read-only access to the eventually consistent copy of the data on the secondary tenant
• The customer chooses the primary region; the secondary region is fixed
Get two endpoints for accessing your storage account
• Primary endpoint
• accountname.<service>.core.windows.net
• Secondary endpoint
• accountname-secondary.<service>.core.windows.net
Same storage keys work for both endpoints
Consistency
• All Writes go to the Primary
• Reads to Primary are Strongly Consistent
• Reads to Secondary are Eventually Consistent
45. Secondary RO Access – Capabilities
Applications will be able to control which location they read data from
• Use one of the two endpoints
• Primary: accountname.<service>.core.windows.net
• Secondary: accountname-secondary.<service>.core.windows.net
• Our client library SDK will provide features for reading from the secondary
• PrimaryOnly, SecondaryOnly, PrimaryThenSecondary, etc
Applications will be able to query the current max geo-replication delay for each service (blob, table, queue) for a storage account
There will be separate storage account metrics for primary and secondary locations
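The endpoint convention and read-fallback options above can be sketched as follows. This is a conceptual sketch, not the actual client library API: `read_primary` and `read_secondary` stand in for real storage calls, and the hostnames follow the naming convention given on this slide.

```python
def endpoints(account, service):
    """Return (primary, secondary) hostnames for a storage account,
    following the accountname / accountname-secondary convention."""
    return (f"{account}.{service}.core.windows.net",
            f"{account}-secondary.{service}.core.windows.net")

def read_with_fallback(read_primary, read_secondary):
    """PrimaryThenSecondary pattern: try the strongly consistent primary
    first; fall back to the eventually consistent secondary if the
    primary is unavailable."""
    try:
        return read_primary()
    except ConnectionError:
        return read_secondary()
```

An application that uses the fallback path must tolerate eventually consistent reads, as noted in the scenarios above.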
46. Windows Azure Import/Export
• Move TBs of data into and out of Windows Azure Blobs by shipping disks
49. Import/Export Features
• Accessible via REST with Portal integration
• Each Job imports/exports data for a single storage account
• Each Job can be up to 10 disks
• Support 3.5” SATA HDDs
• All Disks must be encrypted with BitLocker
54. CORS (Cross Origin Resource Sharing)
• What?
• Browser by default prevents scripts from accessing resources from different origin
• CORS is a mechanism that enables cross origin access for scripts
• Set CORS rules via SetServiceProperties for Blobs, Tables, and Queues
• Can control the origins that can access resources
• Can control the headers that can be accessed
• Why?
• Do not require running a proxy service for web apps to access storage service
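The rule controls described above (allowed origins, allowed and exposed headers) can be illustrated with the shape of a single CORS rule as carried in the service properties. The field names follow the storage service's CORS settings; the origin and header values here are illustrative examples only.

```python
# Illustrative shape of one CORS rule applied via SetServiceProperties.
cors_rule = {
    "AllowedOrigins": ["https://www.contoso.com"],  # sites allowed to call storage directly
    "AllowedMethods": ["GET", "PUT"],               # HTTP verbs permitted cross-origin
    "AllowedHeaders": ["x-ms-meta-*"],              # request headers the browser may send
    "ExposedHeaders": ["x-ms-meta-*"],              # response headers scripts may read
    "MaxAgeInSeconds": 3600,                        # how long a preflight result is cached
}
```

With a rule like this in place, a browser script served from the allowed origin can talk to the storage endpoint without a proxy service in between.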
56. JSON (JavaScript Object Notation)
• What?
• A popular concise format for REST protocols
• OData supports two formats
• ATOMPub: We currently support this, but it is too verbose
• JSON: OData has released multiple flavors of JSON
• Why?
• Improves COGS for applications
• Lower bandwidth consumption (approx. 70% savings), lower CPU utilization, and hence better responsiveness
• Many applications use JSON to represent their object model
• Efficient mapping from object data model to wire protocol
57. 2.1 .NET Library
• New Features
• Async Task methods with support for cancellation
• Byte Array, Text, and File upload/download APIs for blobs
• IQueryable provider for Tables
• Compiled Expressions for Table Entities
• Performance Improvements
• Buffer Pooling
• Multi-Buffer Memory Stream for consistent performance when buffering unknown length data
• .NET MD5 now default (~20% faster than invoking native one)
• Available Soon @ http://www.nuget.org/packages/WindowsAzure.Storage
60. General .NET Best Practices for Azure
• Disable Nagle for small messages (< 1400 bytes)
• ServicePointManager.UseNagleAlgorithm = false;
• Disable Expect 100-Continue*
• ServicePointManager.Expect100Continue = false;
• Increase default connection limit
• ServicePointManager.DefaultConnectionLimit = 100; (Or More)
• Take advantage of .Net 4.5 GC
• GC performance is greatly improved
• Background GC: http://msdn.microsoft.com/en-us/magazine/hh882452.aspx
61. General Best Practices
• Locate Storage accounts close to compute/users
• Understand Account Scalability targets
• Use multiple storage accounts to get more
• Distribute your storage accounts across regions
• Cache critical data sets
• As a Backup data set to fall back on
• To get more request/sec than the account/partition targets
• Distribute load over many partitions and avoid spikes
62. General Best Practices (cont.)
• Use HTTPS
• Optimize what you send & receive
• Blobs: Range reads, Metadata, Head Requests
• Tables: Upsert, Merge, Projection, Point Queries
• Queues: Update Message, Batch size
• Control Parallelism at the application layer
• Unbounded Parallelism can lead to slow latencies and throttling
63. General Best Practices (cont.)
• Enable Logging & Metrics on each storage service
• Can be done via REST, Client API, or Portal
• Enables clients to self-diagnose issues, including performance-related ones
• Data can be automatically GC’d according to a user specified retention interval
• For example, have longer retention for hourly metrics and shorter retention for realtime metrics
64. Blob Best Practice
• Try to match your read size with your write size
• Avoid reading small ranges on blobs with large blocks
• CloudBlockBlob.StreamMinimumReadSizeInBytes / StreamWriteSizeInBytes
• How do I upload a folder the fastest?
• Upload multiple blobs simultaneously
• How do I upload a blob the fastest?
• Use parallel block upload
• Concurrency (C)- Multiple workers upload different blobs
• Parallelism (P) – Multiple workers upload different blocks for same blob
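The two strategies above (concurrency across blobs vs. parallelism within one blob) can be sketched with a thread pool. The uploads are simulated here: in a real client, each parallel task would issue a Put Block against one blob, while each concurrent task would upload a whole separate blob.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_blob_parallel(blocks, put_block, workers=8):
    """Parallelism (P): multiple workers upload different blocks of the
    same blob; the returned IDs keep their order, ready to be committed
    with a final Put Block List."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(put_block, blocks))

def upload_blobs_concurrent(blobs, put_blob, workers=8):
    """Concurrency (C): multiple workers upload different blobs at once,
    spreading load over many partitions."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(put_blob, blobs))
```

As the measurements on the next slide show, a single blob is bound by one partition's limits, so spreading work across blobs (C) scales further than parallelizing within one blob (P).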
65. Concurrency vs. Blob Parallelism
XL VM uploading 512 blobs of 256 MB each (total upload size = 128 GB)
• C=1, P=1 => Averaged ~13.2 MB/s
• C=1, P=30 => Averaged ~50.72 MB/s
• C=30, P=1 => Averaged ~96.64 MB/s
• A single TCP connection is bound by TCP rate control & RTT
• P=30 vs. C=30: test completed almost twice as fast!
• A single blob is bound by the limits of a single partition
• Accessing multiple blobs concurrently scales
[Chart: upload time in seconds for each configuration]
67. Table Best Practice
• Critical Queries: Select PartitionKey, RowKey to avoid hotspots
• Table Scans are expensive – avoid them at all costs for latency sensitive scenarios
• Batch: Same PartitionKey for entities that need to be updated together
• Schema-less: Store multiple types in same table
• Single Index – {PartitionKey, RowKey}: If needed, concatenate columns to form composite keys
• Entity Locality: {PartitionKey, RowKey} determines sort order
• Store related entities together to reduce IO and improve performance
• Table Service Client Layer in 2.1: Dramatic performance improvements and better NoSQL interface
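The composite-key advice above can be made concrete: with only the {PartitionKey, RowKey} index available, extra sort criteria are concatenated into the RowKey, and numeric parts are zero-padded so that lexicographic (string) order matches numeric order. The key scheme below is an illustrative example, not a prescribed format.

```python
def make_row_key(category, item_id):
    """Concatenate columns into a composite RowKey; zero-pad the numeric
    part so string sort order matches numeric order."""
    return f"{category}_{item_id:08d}"

# Entities with the same PartitionKey sort by this composite RowKey,
# keeping related entities adjacent (entity locality).
entities = [("books", 42), ("books", 7), ("music", 1)]
row_keys = sorted(make_row_key(c, i) for c, i in entities)
```

Without the padding, "books_7" would sort after "books_42", breaking range queries over the composite key.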
68. Queue Best Practice
• Make message processing idempotent: messages become visible again if a client worker fails to delete them
• Benefit from Update Message: extend visibility time based on the message, or save intermittent state
• Message Count: use this to scale workers
• Dequeue Count: use it to identify poison messages or validate the invisibility time used
• Blobs to store large messages: increase throughput by having larger batches
• Multiple Queues: to get more than a single queue (partition) target
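The idempotency and poison-message bullets above combine into one processing loop. This is a sketch under stated assumptions: `processed` stands in for durable bookkeeping (e.g. a table lookup keyed by message ID), and the dequeue-count threshold is an illustrative value, not a service default.

```python
MAX_DEQUEUE_COUNT = 5  # illustrative poison threshold

def handle_message(msg, processed, poison, do_work):
    """Process a queue message safely under at-least-once delivery."""
    if msg["dequeue_count"] > MAX_DEQUEUE_COUNT:
        poison.append(msg)       # sideline messages that keep failing
        return "poison"
    if msg["id"] in processed:   # message reappeared after a worker
        return "duplicate"       # failed to delete it: skip the work
    do_work(msg)
    processed.add(msg["id"])     # record completion before deleting
    return "done"
```

Because the work is guarded by the `processed` check, a message that becomes visible again after a worker crash is detected and skipped rather than applied twice.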
69. Resources
• Windows Azure Developer Website
• http://www.windowsazure.com/en-us/develop/net/
• Windows Azure Storage Blog
• http://blogs.msdn.com/b/windowsazurestorage/
• SOSP Paper/Talk
• http://blogs.msdn.com/b/windowsazurestorage/archive/2011/11/20/windows-azure-storage-a-highly-available-cloud-storage-service-with-strong-consistency.aspx
Editor's notes
Slide Objective
Understand different blob types
Speaker Notes
Block blobs are comprised of blocks, each of which is identified by a block ID.
You create or modify a block blob by uploading a set of blocks and committing them by their block IDs.
If you are uploading a block blob that is no more than 64 MB in size, you can also upload it in its entirety with a single Put Blob operation.
When you upload a block to Microsoft Azure using the Put Block operation, it is associated with the specified block blob, but it does not become part of the blob until you call the Put Block List operation and include the block's ID.
The block remains in an uncommitted state until it is specifically committed. Writing to a block blob is thus always a two-step process.
Each block can be a maximum of 4 MB in size. The maximum size for a block blob in version 2009-09-19 is 200 GB, or up to 50,000 blocks.
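The two-step write described above can be sketched as a small state machine: Put Block parks data in an uncommitted set, and only Put Block List, naming block IDs in order, assembles them into the blob. The class below is a conceptual model of that protocol, not the service implementation.

```python
class BlockBlobSketch:
    """Models the two-step Put Block / Put Block List commit protocol."""

    def __init__(self):
        self.uncommitted = {}  # block_id -> data, not yet part of the blob
        self.committed = b""   # the blob's current committed content

    def put_block(self, block_id, data):
        """Upload a block; it stays uncommitted until Put Block List."""
        self.uncommitted[block_id] = data

    def put_block_list(self, block_ids):
        """Commit the named blocks, in the given order, as the blob."""
        self.committed = b"".join(self.uncommitted[i] for i in block_ids)
        self.uncommitted.clear()
```

Note that reading the blob between the two steps still returns the previous committed content, which is exactly why writing a block blob is always a two-step process.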
Page blobs are a collection of pages.
A page is a range of data that is identified by its offset from the start of the blob.
To create a page blob, you initialize the page blob by calling Put Blob and specifying its maximum size.
To add content to or update a page blob, you call the Put Page operation to modify a page or range of pages by specifying an offset and range. All pages must align to 512-byte page boundaries.
Unlike writes to block blobs, writes to page blobs happen in-place and are immediately committed to the blob.
The maximum size for a page blob is 1 TB.
A page written to a page blob may be up to 1 TB in size but will typically be much smaller
Notes
http://msdn.microsoft.com/en-us/library/dd135734.aspx
Block blobs: suited to streaming data
Page blobs: suited to random read/write data
Slide Objectives
Understand the hierarchy of Blob storage
Speaker Notes
Put Blob - Creates a new blob or replaces an existing blob within a container.
Get Blob - Reads or downloads a blob from the system, including its metadata and properties.
Delete Blob - Deletes a blob
Copy Blob - Copies a source blob to a destination blob within the same storage account.
SnapShot Blob - The Snapshot Blob operation creates a read-only snapshot of a blob.
Lease Blob - Establishes an exclusive one-minute write lock on a blob. To write to a locked blob, a client must provide a lease ID.
Using the REST API for the Blob service, developers can create a hierarchical namespace similar to a file system.
Blob names may encode a hierarchy by using a configurable path separator. For example, the blob names MyGroup/MyBlob1 and MyGroup/MyBlob2 imply a virtual level of organization for blobs.
The enumeration operation for blobs supports traversing the virtual hierarchy in a manner similar to that of a file system, so that you can return a set of blobs that are organized beneath a group.
For example, you can enumerate all blobs organized under MyGroup/.
Notes
The Blob service provides storage for entities, such as binary files and text files. The REST API for the Blob service exposes two resources: containers and blobs. A container is a set of blobs; every blob must belong to a container. The Blob service defines two types of blobs:
Block blobs, which are optimized for streaming. This type of blob is the only blob type available with versions prior to 2009-09-19.
Page blobs, which are optimized for random read/write operations and which provide the ability to write to a range of bytes in a blob. Page blobs are available only with version 2009-09-19.
Containers and blobs support user-defined metadata in the form of name-value pairs specified as headers on a request operation.
Using the REST API for the Blob service, developers can create a hierarchical namespace similar to a file system. Blob names may encode a hierarchy by using a configurable path separator. For example, the blob names MyGroup/MyBlob1 and MyGroup/MyBlob2 imply a virtual level of organization for blobs. The enumeration operation for blobs supports traversing the virtual hierarchy in a manner similar to that of a file system, so that you can return a set of blobs that are organized beneath a group. For example, you can enumerate all blobs organized under MyGroup/.
A block blob may be created in one of two ways. Block blobs less than or equal to 64 MB in size can be uploaded by calling the Put Blob operation. Block blobs larger than 64 MB must be uploaded as a set of blocks, each of which must be less than or equal to 4 MB in size. A set of successfully uploaded blocks can be assembled in a specified order into a single contiguous blob by calling Put Block List. The maximum size currently supported for a block blob is 200 GB.
Page blobs are created and initialized with a maximum size with a call to Put Blob. To write content to a page blob, you call the Put Page operation. The maximum size currently supported for a page blob is 1 TB.
Blobs support conditional update operations that may be useful for concurrency control and efficient uploading.
Blobs can be read by calling the Get Blob operation. A client may read the entire blob, or an arbitrary range of bytes.
For the Blob service API reference, see Blob Service API.
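The virtual hierarchy described in these notes can be sketched as a listing function: blob names encode folders with a path separator, and enumeration filters by prefix and groups names at the next delimiter, much like a file-system listing. The function below models that behavior over an in-memory name list; it is not the List Blobs API itself.

```python
def list_blobs(names, prefix="", delimiter="/"):
    """Return (blobs, virtual_directories) under the given prefix,
    grouping names at the first delimiter past the prefix."""
    blobs, dirs = [], set()
    for name in names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        if delimiter in rest:
            # everything up to the next delimiter forms a virtual folder
            dirs.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            blobs.append(name)
    return blobs, sorted(dirs)
```

For example, listing with prefix "MyGroup/" returns MyGroup/MyBlob1 and MyGroup/MyBlob2 as blobs, while listing at the root returns "MyGroup/" as a virtual directory.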
Slide Objective
Understand basics of listing blobs in a container
Speaker Notes
The List Blobs operation enumerates the list of blobs under the specified container.
Can include uncommitted Blobs- see discussion on Blocks and Block Lists
Can include snapshots
Notes
http://msdn.microsoft.com/en-us/library/dd135734.aspx
Slide Objective
Understand pagination when listing blobs
Speaker Notes
Responses over multiple pages return a marker value
This marker is sent to get subsequent page
Notes
http://msdn.microsoft.com/en-us/library/dd135734.aspx
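The marker-based pagination described above follows a simple loop: each response carries a continuation marker, which is passed back to fetch the next page until no marker is returned. The sketch below simulates the listing call over an in-memory list; `fake_list_page` is an illustrative stand-in for a real List Blobs request.

```python
def list_all(list_page, page_size=2):
    """Fetch every page by feeding each response's marker back into the
    next request, until the service returns no marker."""
    results, marker = [], None
    while True:
        page, marker = list_page(marker, page_size)
        results.extend(page)
        if not marker:          # no marker means this was the last page
            break
    return results

# Simulated paged listing: the marker is simply the next start index.
data = ["a", "b", "c", "d", "e"]

def fake_list_page(marker, n):
    start = marker or 0
    page = data[start:start + n]
    next_marker = start + n if start + n < len(data) else None
    return page, next_marker
```

The same loop also naturally handles the uncommitted-blob and snapshot inclusion options mentioned above, since those only change what appears in each page, not how continuation works.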