This document provides an overview of integrating SQL Server Reporting Services (SSRS) with SharePoint. It begins with a brief introduction of the author and their background. It then discusses the benefits of integrating SSRS with SharePoint, including running reports within SharePoint and automatically delivering reports. The bulk of the document outlines 13 tips for the integration process, covering topics like installing and configuring SSRS and SharePoint, deploying reports, and creating subscriptions. It also discusses using SSRS reports with PerformancePoint Services dashboards and against SharePoint lists.
More and more organizations are moving their ETL workloads to a Hadoop-based ELT grid architecture. Hadoop's inherent capabilities, especially its ability to do late binding, address some of the key challenges of traditional ETL platforms. In this presentation, attendees will learn the key factors, considerations, and lessons around ETL for Hadoop: the pros and cons of different extract and load strategies, the best ways to batch data, buffering and compression considerations, leveraging HCatalog, data transformation, integration with existing data transformations, the advantages of different ways of exchanging data, and leveraging Hadoop as a data integration layer. This is an extremely popular presentation on ETL and Hadoop.
Netflix’s Big Data Platform team manages a data warehouse in Amazon S3 with over 60 petabytes of data, writing hundreds of terabytes every day. At this scale, output committers that create extra copies or can’t handle task failures are no longer practical. This talk will explain the problems caused by the available committers when writing to S3, and show how Netflix solved the committer problem.
In this session, you’ll learn:
– Some background about Spark at Netflix
– About output committers, and how both Spark and Hadoop handle failures
– How HDFS and S3 differ, and why HDFS committers don’t work well
– A new output committer that uses the S3 multi-part upload API
– How you can use this new committer in your Spark applications to avoid duplicating data
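The committer idea the talk builds toward can be illustrated without S3 at all: task attempts write to unique staging locations, and only a job-level commit publishes the winning attempts, so retries and failures never leave partial or duplicated output. The following stdlib-only sketch is illustrative, not Netflix's implementation — the real committer finalizes S3 multi-part uploads rather than copying files:

```python
import os
import shutil
import tempfile

class TwoPhaseCommitter:
    """Toy committer: tasks write to private staging paths; only the
    job-level commit publishes them, so failed attempts leave no
    partial output behind. (Illustrative; the real S3 committer
    completes multi-part uploads instead of moving files.)"""

    def __init__(self, final_dir):
        self.final_dir = final_dir
        self.staging_dir = tempfile.mkdtemp(prefix="staging-")
        self.committed_tasks = {}

    def task_attempt_write(self, task_id, attempt_id, data):
        # Each attempt writes to a unique path, so retries never clash.
        path = os.path.join(self.staging_dir, f"{task_id}-{attempt_id}")
        with open(path, "w") as f:
            f.write(data)
        return path

    def task_commit(self, task_id, path):
        # Record exactly one winning attempt per task.
        self.committed_tasks[task_id] = path

    def job_commit(self):
        # Publish only committed attempts; abandoned ones are discarded.
        os.makedirs(self.final_dir, exist_ok=True)
        for task_id, path in self.committed_tasks.items():
            shutil.copy(path, os.path.join(self.final_dir, f"part-{task_id}"))
        shutil.rmtree(self.staging_dir)

committer = TwoPhaseCommitter(tempfile.mkdtemp(prefix="final-"))
p0 = committer.task_attempt_write(0, "attempt-1", "rows for task 0")
committer.task_attempt_write(1, "attempt-1", "stale attempt")  # failed, never committed
p1 = committer.task_attempt_write(1, "attempt-2", "rows for task 1")
committer.task_commit(0, p0)
committer.task_commit(1, p1)
committer.job_commit()
published = sorted(os.listdir(committer.final_dir))
print(published)  # exactly one file per task survives
```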
This document summarizes a presentation on optimizing Zabbix performance through tuning. It discusses identifying and fixing common problems like default templates and database settings. Next, it covers tuning Zabbix configuration by adjusting the number of server processes and monitoring internal stats. Additional optimizations include using proxies to distribute load, partitioning historical tables, and running Zabbix components on separate hardware. The summary emphasizes monitoring internal stats, tuning configurations and databases, disabling housekeeping, and reviewing additional reading on tuning MySQL, PostgreSQL and Zabbix internals.
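As a concrete illustration of the tuning knobs mentioned above, a `zabbix_server.conf` fragment might adjust process counts, cache sizes, and housekeeping like this. The parameter names are real Zabbix server settings; the values are placeholders to size against your own internal-stats monitoring, not recommendations:

```ini
# Increase poller/trapper process counts to match monitored load
StartPollers=50
StartTrappers=10

# Enlarge caches so configuration and history stay in memory
CacheSize=256M
HistoryCacheSize=64M

# Disable internal housekeeping when history tables are partitioned
HousekeepingFrequency=0
```

Setting `HousekeepingFrequency=0` pairs with table partitioning: old history is dropped by rotating partitions instead of row-by-row deletes.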
The session will be a deep-dive introduction to Snowflake, covering Snowflake architecture, virtual warehouses, designing a real use case, and loading data into Snowflake from a data lake.
Beyond REST? Building data services with XMPP (Kellan)
The document discusses how REST works well for everyday problems but breaks down at scale for large social websites with rich APIs and low latency requirements. It introduces XMPP PubSub as an alternative to REST that may be better suited to building large, real-time data services by providing lower latency, flexibility for social network effects, and pub/sub capabilities beyond REST's request/response model. The authors are excited about XMPP PubSub but are not experts, and their specialty is constructing big social sites rather than instant messaging or chat applications.
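The request/response vs. publish/subscribe distinction the authors draw can be sketched in a few lines: with pub/sub, consumers are pushed updates as they happen instead of polling. This toy broker is purely illustrative and contains nothing XMPP-specific:

```python
class PubSubBroker:
    """Toy pub/sub broker: a publisher pushes to a topic once and every
    subscriber's callback fires immediately -- no per-consumer polling,
    which is where the latency win over plain request/response comes from."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        # Fan out one event to every subscriber of the topic.
        for callback in self.subscribers.get(topic, []):
            callback(event)

broker = PubSubBroker()
feed_a, feed_b = [], []
broker.subscribe("photos", feed_a.append)
broker.subscribe("photos", feed_b.append)
broker.publish("photos", "user42 uploaded photo 7")
print(feed_a, feed_b)  # both feeds received the event without polling
```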
Data Engineer's Lunch #83: Strategies for Migration to Apache Iceberg (Anant Corporation)
In this talk, Dremio Developer Advocate Alex Merced discusses strategies for migrating your existing data to Apache Iceberg. He'll cover the following:
How to Migrate Hive, Delta Lake, JSON, and CSV sources to Apache Iceberg
Pros and Cons of an In-place or Shadow Migration
Migrating between Apache Iceberg catalogs (e.g., Hive/Glue to Arctic/Nessie)
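For the in-place vs. shadow distinction, Apache Iceberg's Spark procedures are the usual mechanism: `migrate` converts a Hive table in place, while `snapshot` creates a separate Iceberg table that shadows the source. A small helper that builds the corresponding `CALL` statements might look like this — catalog and table names are placeholders:

```python
def iceberg_migration_sql(catalog, table, mode, dest=None):
    """Build the Spark SQL CALL statement for an Iceberg migration.

    mode="in-place" uses the migrate procedure (replaces the Hive table);
    mode="shadow" uses snapshot (leaves the source table intact).
    """
    if mode == "in-place":
        return f"CALL {catalog}.system.migrate('{table}')"
    if mode == "shadow":
        return f"CALL {catalog}.system.snapshot('{table}', '{dest}')"
    raise ValueError(f"unknown mode: {mode}")

# spark.sql(...) would run these on a session with Iceberg configured.
print(iceberg_migration_sql("spark_catalog", "db.events", "in-place"))
print(iceberg_migration_sql("spark_catalog", "db.events", "shadow", "db.events_iceberg"))
```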
Transactional operations in Apache Hive: present and future (DataWorks Summit)
Apache Hive is an enterprise data warehouse built on top of Hadoop. Hive supports insert, update, delete, and merge SQL operations with transactional semantics, and read operations that run at snapshot isolation. The well-defined semantics of these operations in the face of failure and concurrency are critical to building robust applications on top of Apache Hive. In the past there were many preconditions to enabling these features, which meant giving up other functionality. The need to make these tradeoffs is rapidly being eliminated.
This talk will describe the intended use cases, the architecture of the implementation, and recent improvements and new features built for Hive 3.0. For example, bucketing transactional tables, while supported, is no longer required. The performance overhead of using transactional tables is nearly eliminated relative to identical non-transactional tables. We’ll also cover the Streaming Ingest API, which allows writing batches of events into a Hive table without using SQL.
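The matched/not-matched rules behind Hive's merge operation can be illustrated without a cluster. This toy Python function applies the same semantics a Hive `MERGE INTO ... WHEN MATCHED ... WHEN NOT MATCHED` statement would; row keys and values are invented for the example:

```python
def merge(target, updates):
    """Apply MERGE semantics to a dict keyed by row id, mirroring:
        MERGE INTO t USING u ON t.id = u.id
          WHEN MATCHED AND u.deleted THEN DELETE
          WHEN MATCHED THEN UPDATE ...
          WHEN NOT MATCHED THEN INSERT ...
    """
    result = dict(target)
    for row_id, (value, deleted) in updates.items():
        if row_id in result and deleted:
            del result[row_id]          # matched + delete flag -> DELETE
        elif row_id in result:
            result[row_id] = value      # matched -> UPDATE
        else:
            result[row_id] = value      # not matched -> INSERT
    return result

target = {1: "alice", 2: "bob"}
updates = {2: ("bobby", False),   # matched -> update
           3: ("carol", False),   # not matched -> insert
           1: (None, True)}       # matched + deleted -> delete
print(merge(target, updates))  # → {2: 'bobby', 3: 'carol'}
```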
Speaker
Eugene Koifman, Hortonworks, Principal Software Engineer
dbt Python models - GoDataFest by Guillermo Sanchez (GoDataDriven)
Guillermo Sanchez presented on the pros and cons of using Python models in dbt. While Python models allow for more advanced analytics and leveraging the Python ecosystem, they also introduce more complexity in setup and divergent APIs across platforms. Additionally, dbt may not be well-suited for certain use cases like ingesting external data or building full MLOps pipelines. In general, Python models are best for the right analytical use cases, but caution is needed, especially for production environments.
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud (Noritaka Sekiyama)
This document provides an overview and summary of Amazon S3 best practices and tuning for Hadoop/Spark in the cloud. It discusses the relationship between Hadoop/Spark and S3, the differences between HDFS and S3 and their use cases, details on how S3 behaves from the perspective of Hadoop/Spark, well-known pitfalls and tunings related to S3 consistency and multipart uploads, and recent community activities related to S3. The presentation aims to help users optimize their use of S3 storage with Hadoop/Spark frameworks.
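Several of the pitfalls and tunings referred to above surface as S3A connector settings. The keys below are real `fs.s3a.*` configuration properties from the Hadoop S3A connector; the values are illustrative starting points to benchmark against your own workload, not recommendations:

```python
# Illustrative S3A tuning knobs for Hadoop/Spark jobs writing to S3;
# these would normally be set in core-site.xml or as spark.hadoop.* properties.
s3a_tuning = {
    # Size of each multipart upload part; larger parts mean fewer requests
    "fs.s3a.multipart.size": "128M",
    # File size at which uploads switch to multipart
    "fs.s3a.multipart.threshold": "128M",
    # Pool of HTTP connections to S3; raise for many concurrent tasks
    "fs.s3a.connection.maximum": "200",
    # Threads used for parallel block uploads
    "fs.s3a.threads.max": "64",
}
for key, value in sorted(s3a_tuning.items()):
    print(f"{key}={value}")
```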
Choose Your Weapon: Comparing Spark on FPGAs vs GPUs (Databricks)
Today, general-purpose CPU clusters are the most widely used environment for data analytics workloads. Recently, acceleration solutions employing field-programmable hardware have emerged providing cost, performance and power consumption advantages. Field programmable gate arrays (FPGAs) and graphics processing units (GPUs) are two leading technologies being applied. GPUs are well-known for high-performance dense-matrix, highly regular operations such as graphics processing and matrix manipulation. FPGAs are flexible in terms of programming architecture and are adept at providing performance for operations that contain conditionals and/or branches. These architectural differences have significant performance impacts, which manifest all the way up to the application layer. It is therefore critical that data scientists and engineers understand these impacts in order to inform decisions about if and how to accelerate.
This talk will characterize the architectural aspects of the two hardware types as applied to analytics, with the ultimate goal of informing the application programmer. Recently, both GPUs and FPGAs have been applied to Apache SparkSQL, via services on Amazon Web Services (AWS) cloud. These solutions’ goal is providing Spark users high performance and cost savings. We first characterize the key aspects of the two hardware platforms. Based on this characterization, we examine and contrast the sets and types of SparkSQL operations they accelerate well, how they accelerate them, and the implications for the user’s application. Finally, we present and analyze a performance comparison of the two AWS solutions (one FPGA-based, one GPU-based). The tests employ the TPC-DS (decision support) benchmark suite, a widely used performance test for data analytics.
Improving Apache Spark for Dynamic Allocation and Spot Instances (Databricks)
This presentation will explore the new work in Spark 3.1 adding the concept of graceful decommissioning and how we can use this to improve Spark’s performance in both dynamic allocation and spot/preemptable instances. Together we’ll explore how Spark’s dynamic allocation has evolved over time, and why the different changes have been needed. We’ll also look at the multi-company collaboration that resulted in being able to deliver this feature and I’ll end with encouraging pointers on how to get more involved in Spark’s development.
NetApp SE Training: StorageGRID Webscale Technical Overview (solarisyougood)
The document provides an overview of StorageGRID Webscale, an object storage platform from NetApp. It discusses key concepts such as object storage, metadata management, and StorageGRID's dynamic policy engine. The policy engine uses metadata and user-defined rules to intelligently place and manage objects across multiple sites, storage tiers, and protocols (e.g. S3) over their lifecycle. This allows building complex data management policies without impacting performance or capacity.
- The document provides an overview of SQL Server 2019 Master Data Services (MDS), including what MDS is, its components, how to develop MDS models, how to integrate MDS with other systems, and how to administer MDS.
- Key topics covered include MDS architecture, the MDS repository, model development features like entities, attributes, hierarchies and business rules, management features like versioning and changesets, and integration methods like staging tables and views.
- The session aims to explain what MDS is, how to install, configure and use MDS for projects, but does not cover programming with master data, high availability, or migration from prior versions.
Storage Requirements and Options for Running Spark on Kubernetes (DataWorks Summit)
In a world of serverless computing, users tend to be frugal about expenditure on compute, storage, and other resources; paying for them while they aren’t in use becomes a significant factor. Offering Spark as a service in the cloud presents unique challenges, and running Spark on Kubernetes raises many of them, especially around storage and persistence. Spark workloads have distinct storage requirements for intermediate data, long-term persistence, and shared file systems, and those requirements become even tighter when the same platform must be offered as a service to enterprises managing GDPR and other compliance regimes such as ISO 27001 and HIPAA certifications.
This talk covers the challenges involved in providing serverless Spark clusters and shares the specific issues one can encounter when running large Kubernetes clusters in production, especially scenarios related to persistence.
This talk will help people running Kubernetes or the Docker runtime in production understand the various storage options available, which are most suitable for running Spark workloads on Kubernetes, and what more can be done.
Apache Pinot Case Study: Building Distributed Analytics Systems Using Apache ... (HostedbyConfluent)
The document describes Apache Pinot, an open source distributed real-time analytics platform used at LinkedIn. It discusses the challenges of building user-facing real-time analytics systems at scale. It initially describes LinkedIn's use of Apache Kafka for ingestion and Apache Pinot for queries, but notes challenges with Pinot's initial Kafka consumer group-based approach for real-time ingestion, such as incorrect results, limited scalability, and high storage overhead. It then presents Pinot's new partition-level consumption approach which addresses these issues by taking control of partition assignment and checkpointing, allowing for independent and flexible scaling of individual partitions across servers.
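The partition-level approach described above — the system, not a Kafka consumer group, decides which server consumes each partition and where it checkpoints — can be sketched with a toy assignment-and-checkpoint table. Server and partition names here are invented for illustration:

```python
class PartitionConsumption:
    """Toy model of partition-level consumption: assignment and
    checkpoints are tracked explicitly per partition, so any partition
    can be reassigned or resumed from its last checkpoint independently
    of the others -- unlike consumer-group rebalancing."""

    def __init__(self):
        self.assignment = {}   # partition -> server
        self.checkpoints = {}  # partition -> last committed offset

    def assign(self, partition, server):
        self.assignment[partition] = server

    def consume(self, partition, offsets):
        # Consume a batch and checkpoint the high-water mark.
        self.checkpoints[partition] = max(offsets)

    def resume_offset(self, partition):
        # A replacement replica restarts exactly after the checkpoint.
        return self.checkpoints.get(partition, -1) + 1

state = PartitionConsumption()
state.assign(0, "server-a")
state.assign(1, "server-b")
state.consume(0, [0, 1, 2])
state.assign(0, "server-c")    # move partition 0 without touching partition 1
print(state.resume_offset(0))  # → 3
print(state.resume_offset(1))  # → 0
```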
Snowflake: The most cost-effective agile and scalable data warehouse ever! (Visual_BI)
In this webinar, the presenter will take you through the most revolutionary data warehouse, Snowflake, with a live demo and technical and functional discussions with a customer. Ryan Goltz from Chesapeake Energy and Tristan Handy, creator of dbt Cloud and owner of Fishtown Analytics, will also be joining the webinar.
BI 4.0 Migration Strategy and Best Practices (Eric Molner)
The document discusses best practices for migrating to SAP BusinessObjects BI 4.0 including conducting an assessment of the current BI environment and existing content, developing a migration plan and roadmap, installing and configuring the new BI 4.0 software, migrating and converting existing reports and content where needed, testing the new system, and providing training to users. It also outlines Analytics8's methodology for assisting with the various phases of a BI migration project from assessment to deployment.
Deploying Schemas and XMetaL Customization Files (XMetaL)
Derek Read discusses how XMetaL locates schemas, customization files, and other resources needed for document editing. It first looks in the same folder as the schema, then in default installation folders, and will auto-generate files if they are not found. Complex customizations may involve multiple files packaged together, such as in a XAC file. The XMetaL Developer tool assists with debugging the lookup process. Questions about specific CMS integrations should be directed to the CMS provider.
Zabbix is free, open-source software for monitoring networks and applications. It comprises a server, a web interface, agents, and proxies that collect data from monitored hosts. Zabbix lets you configure items, triggers, maps, and notifications to manage events and analyze performance history. It supports a wide range of platforms and applications.
End-to-end Data Governance with Apache Avro and Atlas (DataWorks Summit)
This document discusses end-to-end data governance with Apache Avro and Apache Atlas at Comcast. It outlines how Comcast uses Avro for schema governance and Apache Atlas for data governance, including metadata browsing, schema registry, and tracking data lineage. Comcast has extended Atlas with new types for Avro schemas and customizations to better handle their hybrid environment and integrate platforms for comprehensive data governance.
This document discusses Spark security and provides an overview of authentication, authorization, encryption, and auditing in Spark. It describes how Spark leverages Kerberos for authentication and uses services like Ranger and Sentry for authorization. It also outlines how communication channels in Spark are encrypted and some common issues to watch out for related to Spark security.
ORC files were originally introduced in Hive, but have now migrated to an independent Apache project. This has sped up the development of ORC and simplified integrating ORC into other projects, such as Hadoop, Spark, Presto, and Nifi. There are also many new tools that are built on top of ORC, such as Hive’s ACID transactions and LLAP, which provides incredibly fast reads for your hot data. LLAP also provides strong security guarantees that allow each user to only see the rows and columns that they have permission for.
This talk will discuss the details of the ORC and Parquet formats and what the relevant tradeoffs are. In particular, it will discuss how to format your data and the options to use to maximize your read performance. In particular, we’ll discuss when and how to use ORC’s schema evolution, bloom filters, and predicate push down. It will also show you how to use the tools to translate ORC files into human-readable formats, such as JSON, and display the rich metadata from the file including the type in the file and min, max, and count for each column.
The document discusses transparent data encryption in PostgreSQL. It describes threats to unencrypted database servers like privilege abuse and SQL injections. It then covers using buffer-level encryption in PostgreSQL to encrypt data in shared memory and at rest on disk. This provides encryption with less performance overhead than per-query encryption. The document proposes encrypting WAL files, system catalogs, and temporary files in addition to table data for stronger security. It also discusses key management with a two-tier architecture involving master and tablespace keys.
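The two-tier key hierarchy mentioned above — a master key that wraps per-tablespace keys, so rotating the master key re-wraps a few bytes instead of re-encrypting the data — can be illustrated with a deliberately toy stdlib sketch. The XOR keystream here stands in for real authenticated encryption and must not be used as such:

```python
import hashlib
import hmac
import secrets

def keystream(key, nonce, length):
    # Toy keystream from HMAC-SHA256 in counter mode (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap(key, nonce, data):
    # "Encrypt" by XOR with the keystream; unwrapping is the same operation.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Tier 1: master key, kept outside the database (e.g. in a KMS).
master_key = secrets.token_bytes(32)
# Tier 2: per-tablespace key that actually encrypts pages and WAL.
tablespace_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(8)

# Only the *wrapped* tablespace key is ever stored on disk.
wrapped = wrap(master_key, nonce, tablespace_key)

# Rotating the master key means re-wrapping these 32 bytes, not the data.
unwrapped = wrap(master_key, nonce, wrapped)
print(unwrapped == tablespace_key)  # → True
```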
Tez Shuffle Handler: Shuffling at Scale with Apache Hadoop (DataWorks Summit)
In this talk we introduce a new Shuffle Handler for Tez, a YARN auxiliary service, that addresses the shortcomings and performance bottlenecks of the legacy MapReduce Shuffle Handler, the default shuffle service in Apache Tez. Based on our experience running Apache Pig and Apache Hive at scale on Apache Tez at Yahoo!, advanced features like auto-parallelism and session mode expose specific limitations in the shuffle service, which was not designed with these features in mind.
A highly auto-reduced job suffers from longer fetch times, as the number of fetches per downstream task increases by the auto-reduction factor. The Apache Tez Shuffle Handler adds composite fetch, with support for multi-partition fetch, to mitigate this performance slowdown.
Also, since Apache Tez DAGs are run completely within a single application unlike their equivalent MapReduce jobs, intermediate shuffle data in Tez can linger beyond its usefulness. The Apache Tez Shuffle Handler provides deletion APIs to reduce disk usage for such long running Tez sessions.
As this is an emerging technology, we will outline the future roadmap for the Apache Tez Shuffle Handler and provide performance evaluation results from real-world jobs at scale.
Microsoft SQL Server Reporting Services (SSRS) allows users to create, manage, and view reports. It includes components like data sources, datasets, reports, output formats, delivery targets, and a metadata database. Reports can be authored in Visual Studio and deployed to a report server. When executed, the report server retrieves data from the data source, processes the report definition, and delivers outputs like HTML, Excel, PDF etc. to users on demand or via scheduled subscriptions.
The document describes various expressions in SQL Server Reporting Services (SSRS) that can format dates and numbers, perform calculations on dates, look up values from other datasets, and conditionally format text based on field values. Expressions allow retrieving and manipulating data, performing conditional logic, and formatting output in SSRS reports. Common functions used include FORMAT, DATEADD, IIF, LOOKUP, and JOIN.
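For reference, the functions named above appear in report definitions as expressions of this shape; the field, dataset, and parameter names here are invented for illustration:

```
=Format(Fields!OrderDate.Value, "yyyy-MM-dd")
=DateAdd("d", 30, Fields!OrderDate.Value)
=IIF(Fields!Total.Value > 1000, "High", "Low")
=Lookup(Fields!ProductID.Value, Fields!ID.Value, Fields!ProductName.Value, "Products")
=Join(Parameters!Region.Value, ", ")
```

`Lookup` matches a value from the current dataset against a second dataset and returns the corresponding result field, while `Join` concatenates a multi-value parameter's selections into one string.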
dbt Python models - GoDataFest by Guillermo SanchezGoDataDriven
Guillermo Sanchez presented on the pros and cons of using Python models in dbt. While Python models allow for more advanced analytics and leveraging the Python ecosystem, they also introduce more complexity in setup and divergent APIs across platforms. Additionally, dbt may not be well-suited for certain use cases like ingesting external data or building full MLOps pipelines. In general, Python models are best for the right analytical use cases, but caution is needed, especially for production environments.
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the CloudNoritaka Sekiyama
This document provides an overview and summary of Amazon S3 best practices and tuning for Hadoop/Spark in the cloud. It discusses the relationship between Hadoop/Spark and S3, the differences between HDFS and S3 and their use cases, details on how S3 behaves from the perspective of Hadoop/Spark, well-known pitfalls and tunings related to S3 consistency and multipart uploads, and recent community activities related to S3. The presentation aims to help users optimize their use of S3 storage with Hadoop/Spark frameworks.
Choose Your Weapon: Comparing Spark on FPGAs vs GPUsDatabricks
Today, general-purpose CPU clusters are the most widely used environment for data analytics workloads. Recently, acceleration solutions employing field-programmable hardware have emerged providing cost, performance and power consumption advantages. Field programmable gate arrays (FPGAs) and graphics processing units (GPUs) are two leading technologies being applied. GPUs are well-known for high-performance dense-matrix, highly regular operations such as graphics processing and matrix manipulation. FPGAs are flexible in terms of programming architecture and are adept at providing performance for operations that contain conditionals and/or branches. These architectural differences have significant performance impacts, which manifest all the way up to the application layer. It is therefore critical that data scientists and engineers understand these impacts in order to inform decisions about if and how to accelerate.
This talk will characterize the architectural aspects of the two hardware types as applied to analytics, with the ultimate goal of informing the application programmer. Recently, both GPUs and FPGAs have been applied to Apache SparkSQL, via services on Amazon Web Services (AWS) cloud. These solutions’ goal is providing Spark users high performance and cost savings. We first characterize the key aspects of the two hardware platforms. Based on this characterization, we examine and contrast the sets and types of SparkSQL operations they accelerate well, how they accelerate them, and the implications for the user’s application. Finally, we present and analyze a performance comparison of the two AWS solutions (one FPGA-based, one GPU-based). The tests employ the TPC-DS (decision support) benchmark suite, a widely used performance test for data analytics.
Improving Apache Spark for Dynamic Allocation and Spot InstancesDatabricks
This presentation will explore the new work in Spark 3.1 adding the concept of graceful decommissioning and how we can use this to improve Spark’s performance in both dynamic allocation and spot/preemptable instances. Together we’ll explore how Spark’s dynamic allocation has evolved over time, and why the different changes have been needed. We’ll also look at the multi-company collaboration that resulted in being able to deliver this feature and I’ll end with encouraging pointers on how to get more involved in Spark’s development.
NetApp Se training storage grid webscale technical overviewsolarisyougood
The document provides an overview of StorageGRID Webscale, an object storage platform from NetApp. It discusses key concepts such as object storage, metadata management, and StorageGRID's dynamic policy engine. The policy engine uses metadata and user-defined rules to intelligently place and manage objects across multiple sites, storage tiers, and protocols (e.g. S3) over their lifecycle. This allows building complex data management policies without impacting performance or capacity.
- The document provides an overview of SQL Server 2019 Master Data Service (MDS), including what MDS is, its components, how to develop MDS models, integrate MDS with other systems, and administer MDS.
- Key topics covered include MDS architecture, the MDS repository, model development features like entities, attributes, hierarchies and business rules, management features like versioning and changesets, and integration methods like staging tables and views.
- The session aims to explain what MDS is, how to install, configure and use MDS for projects, but does not cover programming with master data, high availability, or migration from prior versions.
Storage Requirements and Options for Running Spark on KubernetesDataWorks Summit
In a world of serverless computing users tend to be frugal when it comes to expenditure on compute, storage and other resources. Paying for the same when they aren’t in use becomes a significant factor. Offering Spark as service on cloud presents very unique challenges. Running Spark on Kubernetes presents a lot of challenges especially around storage and persistence. Spark workloads have very unique requirements of Storage for intermediate data, long time persistence, Share file system and requirements become very tight when it same need to be offered as a service for enterprise to mange GDPR and other compliance like ISO 27001 and HIPAA certifications.
This talk covers challenges involved in providing Serverless Spark Clusters share the specific issues one can encounter when running large Kubernetes clusters in production especially covering the scenarios related to persistence.
This talk will help people using Kubernetes or docker runtime in production and help them understand various storage options available and which is more suitable for running Spark workloads on Kubernetes and what more can be done
Apache Pinot Case Study: Building Distributed Analytics Systems Using Apache ...HostedbyConfluent
The document describes Apache Pinot, an open source distributed real-time analytics platform used at LinkedIn. It discusses the challenges of building user-facing real-time analytics systems at scale. It initially describes LinkedIn's use of Apache Kafka for ingestion and Apache Pinot for queries, but notes challenges with Pinot's initial Kafka consumer group-based approach for real-time ingestion, such as incorrect results, limited scalability, and high storage overhead. It then presents Pinot's new partition-level consumption approach which addresses these issues by taking control of partition assignment and checkpointing, allowing for independent and flexible scaling of individual partitions across servers.
Snowflake: The most cost-effective agile and scalable data warehouse ever!Visual_BI
In this webinar, the presenter will take you through the most revolutionary data warehouse, Snowflake with a live demo and technical and functional discussions with a customer. Ryan Goltz from Chesapeake Energy and Tristan Handy, creator of DBT Cloud and owner of Fishtown Analytics will also be joining the webinar.
Bi 4.0 Migration Strategy and Best PracticesEric Molner
The document discusses best practices for migrating to SAP BusinessObjects BI 4.0 including conducting an assessment of the current BI environment and existing content, developing a migration plan and roadmap, installing and configuring the new BI 4.0 software, migrating and converting existing reports and content where needed, testing the new system, and providing training to users. It also outlines Analytics8's methodology for assisting with the various phases of a BI migration project from assessment to deployment.
Deploying Schemas and XMetaL Customization FilesXMetaL
Derek Read discusses how XMetaL locates schemas, customization files, and other resources needed for document editing. It first looks in the same folder as the schema, then in default installation folders, and will auto-generate files if they are not found. Complex customizations may involve multiple files packaged together, such as in a XAC file. The XMetaL Developer tool assists with debugging the lookup process. Questions about specific CMS integrations should be directed to the CMS provider.
O Zabbix é um software gratuito e de código aberto para monitoramento de rede e aplicações. Ele possui servidor, interface web, agentes e proxies para coletar dados de hosts monitorados. O Zabbix permite configurar itens, triggers, mapas e notificações para gerenciar eventos e analisar históricos de desempenho. Suporta diversas plataformas e aplicações.
End-to-end Data Governance with Apache Avro and Atlas (DataWorks Summit)
This document discusses end-to-end data governance with Apache Avro and Apache Atlas at Comcast. It outlines how Comcast uses Avro for schema governance and Apache Atlas for data governance, including metadata browsing, schema registry, and tracking data lineage. Comcast has extended Atlas with new types for Avro schemas and customizations to better handle their hybrid environment and integrate platforms for comprehensive data governance.
This document discusses Spark security and provides an overview of authentication, authorization, encryption, and auditing in Spark. It describes how Spark leverages Kerberos for authentication and uses services like Ranger and Sentry for authorization. It also outlines how communication channels in Spark are encrypted and some common issues to watch out for related to Spark security.
ORC files were originally introduced in Hive, but have now migrated to an independent Apache project. This has sped up the development of ORC and simplified integrating ORC into other projects, such as Hadoop, Spark, Presto, and Nifi. There are also many new tools that are built on top of ORC, such as Hive’s ACID transactions and LLAP, which provides incredibly fast reads for your hot data. LLAP also provides strong security guarantees that allow each user to only see the rows and columns that they have permission for.
This talk will discuss the details of the ORC and Parquet formats and what the relevant tradeoffs are. In particular, it will discuss how to format your data and the options to use to maximize your read performance. In particular, we’ll discuss when and how to use ORC’s schema evolution, bloom filters, and predicate push down. It will also show you how to use the tools to translate ORC files into human-readable formats, such as JSON, and display the rich metadata from the file including the type in the file and min, max, and count for each column.
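The predicate push down mentioned above relies on per-stripe (ORC) or per-row-group (Parquet) min/max statistics: the reader skips any chunk whose value range cannot match the predicate. A simplified Python sketch, with an invented stripe-statistics layout, shows the idea; it is illustrative only, not the actual reader logic of either format.

```python
# Illustrative sketch of min/max-based predicate push down (not ORC or
# Parquet reader code). Each entry in `stripes` maps a column name to
# the (min, max) statistics recorded for that stripe/row group.
def stripes_to_read(stripes, column, predicate_min, predicate_max):
    """Return indices of stripes whose [min, max] range for `column`
    overlaps the predicate range; all other stripes are skipped
    without any I/O."""
    selected = []
    for i, stats in enumerate(stripes):
        col_min, col_max = stats[column]
        # Keep the stripe only if its value range can contain a match.
        if col_max >= predicate_min and col_min <= predicate_max:
            selected.append(i)
    return selected
```

Bloom filters extend the same idea to point lookups: they can rule out a stripe even when the predicate value falls inside its min/max range.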
The document discusses transparent data encryption in PostgreSQL. It describes threats to unencrypted database servers like privilege abuse and SQL injections. It then covers using buffer-level encryption in PostgreSQL to encrypt data in shared memory and at rest on disk. This provides encryption with less performance overhead than per-query encryption. The document proposes encrypting WAL files, system catalogs, and temporary files in addition to table data for stronger security. It also discusses key management with a two-tier architecture involving master and tablespace keys.
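The two-tier key architecture can be sketched in a few lines: a master key protects per-tablespace keys, so rotating the master key does not require re-encrypting table data. The derivation below (HMAC-based, using Python's standard library) is a conceptual illustration only; the actual PostgreSQL proposal wraps randomly generated keys rather than deriving them, and the function name here is invented.

```python
import hashlib
import hmac

# Conceptual sketch of a two-tier key hierarchy: one master key,
# many tablespace keys. Real designs typically generate random
# tablespace keys and encrypt ("wrap") them under the master key;
# deterministic derivation is used here only to keep the example short.
def derive_tablespace_key(master_key: bytes, tablespace_oid: int) -> bytes:
    """Derive a 256-bit tablespace key from the master key and the
    tablespace identifier."""
    return hmac.new(master_key,
                    str(tablespace_oid).encode(),
                    hashlib.sha256).digest()
```

The benefit of the hierarchy is operational: key rotation touches only the small set of wrapped (or derived) keys, not the terabytes of encrypted table data underneath.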
Tez Shuffle Handler: Shuffling at Scale with Apache Hadoop (DataWorks Summit)
In this talk we introduce a new Shuffle Handler for Tez, a YARN auxiliary service, that addresses the shortcomings and performance bottlenecks of the legacy MapReduce Shuffle Handler, the default shuffle service in Apache Tez. Based on our experience running Apache Pig and Apache Hive at scale on Apache Tez at Yahoo!, advanced features like auto-parallelism and session mode expose specific limitations in the shuffle service, which was not designed with these features in mind.
A highly auto-reduced job suffers from longer fetch times because the number of fetches per downstream task increases by the auto-reduction factor. The Apache Tez Shuffle Handler adds composite fetch, which supports multi-partition fetches, to mitigate this slowdown.
Also, since Apache Tez DAGs are run completely within a single application unlike their equivalent MapReduce jobs, intermediate shuffle data in Tez can linger beyond its usefulness. The Apache Tez Shuffle Handler provides deletion APIs to reduce disk usage for such long running Tez sessions.
Finally, we will outline the future roadmap for the Apache Tez Shuffle Handler and present performance evaluation results from real-world jobs at scale.
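The fetch-count blow-up this abstract describes is simple arithmetic, sketched below with an invented function name and illustrative numbers (not Tez internals): when reducers are auto-reduced by a factor N, each surviving reducer must pull N partitions from every upstream task, and a composite fetch collapses those N pulls into one call per upstream task.

```python
# Back-of-the-envelope sketch of why auto-reduction inflates fetch
# counts, and how a composite (multi-partition) fetch mitigates it.
def fetches_per_task(upstream_tasks, auto_reduction_factor, composite=False):
    """Number of fetch calls one downstream task must issue.

    Without composite fetch, each of the N partitions merged into this
    reducer requires its own fetch from every upstream task. With
    composite fetch, all N partitions come back in a single call.
    """
    per_upstream = 1 if composite else auto_reduction_factor
    return upstream_tasks * per_upstream
```

For example, with 1,000 upstream tasks and an auto-reduction factor of 4, a reducer goes from 4,000 fetches down to 1,000 with composite fetch.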
Microsoft SQL Server Reporting Services (SSRS) allows users to create, manage, and view reports. It includes components like data sources, datasets, reports, output formats, delivery targets, and a metadata database. Reports can be authored in Visual Studio and deployed to a report server. When executed, the report server retrieves data from the data source, processes the report definition, and delivers outputs like HTML, Excel, PDF etc. to users on demand or via scheduled subscriptions.
The document describes various expressions in SQL Server Reporting Services (SSRS) that can format dates and numbers, perform calculations on dates, look up values from other datasets, and conditionally format text based on field values. Expressions allow retrieving and manipulating data, performing conditional logic, and formatting output in SSRS reports. Common functions used include FORMAT, DATEADD, IIF, LOOKUP, and JOIN.
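For readers unfamiliar with the functions named above, the Python sketch below mimics the semantics of two of them, IIF and LOOKUP. In an actual report these are written as report expressions such as `=IIF(Fields!Profit.Value < 0, "Red", "Black")`; the Python versions are only an illustration of the behavior (note that real SSRS IIF evaluates both branches, unlike this short-circuiting sketch).

```python
# Illustrative Python equivalents of two common SSRS expression
# functions; not SSRS syntax, just the behavior.
def iif(condition, when_true, when_false):
    # SSRS IIF: inline conditional, used e.g. to color negative
    # values red in conditional formatting.
    return when_true if condition else when_false

def lookup(source_value, dest_rows, dest_key, dest_field):
    # SSRS Lookup: fetch the first matching value from another
    # dataset, keyed on a shared field.
    for row in dest_rows:
        if row[dest_key] == source_value:
            return row[dest_field]
    return None
```

Used together, these cover the two expression patterns the summary highlights: conditional formatting based on field values, and pulling values across datasets.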
The document provides an overview of Office 365 administration. It describes the key components of Office 365 including Exchange, Lync, and SharePoint. It also summarizes the main functions and interface of the Office 365 Admin Portal, including managing users, licenses, and service settings. The document outlines how to add, edit, and manage multiple users from within the Admin Portal.
By now you may have heard that JavaScript is becoming a viable solution for SharePoint Development, but where do you get started? This session will start with some of the basics and introduce attendees to a few different Javascript libraries such as jQuery, Knockout, Bootstrap, etc. It will showcase SharePoint's REST API and provide some examples of how to conduct basic CRUD operations which you can repurpose for your own custom SharePoint Apps.
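The session's CRUD examples use JavaScript, but the shape of a SharePoint REST call is easy to sketch in any language. The Python helper below only builds the request pieces for creating a list item; the site URL, list name, and digest value are placeholders, actually sending it requires authentication appropriate to your environment, and verbose OData payloads additionally need a `__metadata` type field omitted here for brevity.

```python
import json

# Minimal sketch of a SharePoint REST "create list item" call,
# translated from the JavaScript pattern for illustration.
def build_create_item_request(site_url, list_title, fields, form_digest):
    """Return (url, headers, body) for POSTing a new item to a list."""
    url = f"{site_url}/_api/web/lists/getbytitle('{list_title}')/items"
    headers = {
        "Accept": "application/json;odata=verbose",
        "Content-Type": "application/json;odata=verbose",
        # The form digest is required on all write (POST) requests.
        "X-RequestDigest": form_digest,
    }
    body = json.dumps(fields)
    return url, headers, body
```

Read operations follow the same URL pattern with a GET and no digest, which is why the REST API is a convenient foundation for custom SharePoint apps.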
Groups are a new feature in Office 365 for communication and collaboration.
Groups are not based in SharePoint Online or Exchange Online; rather, Groups are an Office 365 feature that uses several components from SharePoint Online, Exchange Online, Azure, and more…
Groups are a loose coupling of independent services.
SPS St Louis - SSRS 2012 SharePoint 2013 List Reporting (Patrick Tucker)
This document provides an overview of using SQL Server Reporting Services (SSRS) to create reports in SharePoint. It discusses the software requirements, how to configure the required SharePoint service applications, enabling reporting features, and creating a report library. It also covers building reports using Report Builder, adding data sources, parameters and filters, and report parts. The document includes demos and discusses permissions, metadata, managing reports, and common issues. References and contact information are also provided.
Configuring SharePoint 2013 for BI scenarios (SPC Adriatics)
Configuring SharePoint 2013 for BI is not just a matter of clicking Next in the configuration wizard; it needs special attention when configuring service applications, and of course we cannot forget about configuring Kerberos delegation.
We take a look at configuring PerformancePoint, PowerPivot, and Reporting Services in SharePoint integrated mode, and everything you need to know to successfully configure BI services.
This document summarizes how to integrate SQL Server Reporting Services (SSRS) with SharePoint. It discusses the benefits of integration such as a unified interface for reports and other documents. It also provides an overview of the installation and configuration steps, including installing SSRS and SharePoint, registering SSRS in SharePoint, creating SSRS service applications, and activating features in site collections. The document concludes with information on creating and publishing reports within the SharePoint interface.
SQL Server Reporting Services (SSRS) is a reporting tool that allows users to create and view reports. It includes components like the Report Server, Report Builder, and Report Manager. When a user requests a report, the SSRS server retrieves data from sources, merges it with the report definition, and returns the generated report to the client. Reports in SSRS can be designed using Visual Studio and include things like datasets, parameters, charts, and expressions. Security and permissions are managed through roles and role assignments that control access to report content.
- SSRS with SQL Server 2008 R2 introduced several new features including improved data visualization, enhanced report design tools, support for additional data sources, and better collaboration and sharing capabilities.
- The user interface in Report Manager and Report Builder was simplified and enhanced with features like an AJAX report viewer and support for rotating text and additional chart types.
- Developers have additional tools for building reports including new lookup functions, calculating aggregates of aggregates, and publishing report parts to a gallery. Data can now be retrieved from SQL Azure, Parallel Data Warehouse, and SharePoint lists.
- Shared datasets and cache refresh plans improve performance and maintainability. Reports can also be rendered as Atom feeds for consumption in tools like Excel 2010.
This document discusses several reports created using SQL Server Reporting Services (SSRS) and deployed to SharePoint. It describes employee job reports showing total hours and costs by week with parameters for date range. It also covers overhead category reports showing spending by fiscal quarter with previous quarter comparisons. Additional reports discussed include revenue reports with 12-month moving averages, returns key performance indicator reports, sales by region reports, and pie charts showing revenue by region. The document provides examples of report layouts, data sources using multidimensional expressions (MDX), parameters, and previews within SSRS and as deployed to SharePoint.
Denny Lee's Data Camp v1.0 talk on SSRS Best Practices for IT (Bala Subra)
Building and Deploying Large Scale SQL Server Reporting Services Environments Technical Note:
* Report Catalog sizing
* The benefits of File System snapshots for SSRS 2005
* Why File System snapshots may not help for SSRS 2008
* Using Cache Execution
* Load Balancing your Network
* Isolate your workloads
* Report Data Performance Considerations
An Agile methodology approach to SSRS reporting: how to apply principles from the Agile project management process to create better SSRS reports.
SQL Server 2012 Reporting Services is Now a SharePoint Service Application (InnoTech)
Reporting Services in SQL Server 2012 is now configured as a SharePoint 2010 service application:
- Reporting Services (RS) is hosted in the SharePoint 2010 shared service application pool. The RS catalog databases are managed as SharePoint service application databases.
- Administration of RS is now through the SharePoint Central Administration user interface, including configuration, monitoring and management of the RS service application.
- Integration with SharePoint provides improved communication, authentication, deployment and a more unified administration experience for RS compared to previous versions.
This document summarizes a presentation about advanced reporting techniques and managing reports in SQL Server Reporting Services (SSRS). The presentation covers SSRS architecture, linked reports, subscriptions, the Report Manager overview, snapshots and comparisons, report history, overriding the report server database, user and group security, the Report Builder, and demos. The goal is to help attendees better understand editing reports, managing reports, and security in SSRS.
SPS Virginia Beach - SSRS 2012 and SharePoint 2010 Reporting (Patrick Tucker)
The document discusses reporting on SharePoint 2010 lists using SQL Server Reporting Services (SSRS) 2012. It provides an overview of the options for building reports on SharePoint data, including using PowerPivot or SSRS. It also covers the necessary software, steps for setting up SSRS in SharePoint integrated mode, using Report Builder to create reports, displaying and filtering data, and some considerations for managing reports.
SPCSEA 2013 - Upgrading to SharePoint 2013 (Michael Noel)
This document provides an overview of best practices for upgrading to SharePoint 2013. It discusses upgrade fundamentals including requirements, supported version upgrades, and the Microsoft versus third-party approaches. It covers pre-upgrade tasks like claims migration and content database testing. Detailed steps are provided for upgrading service applications like the managed metadata service and user profile sync. Post-upgrade health checks and visual changes are also addressed. The presentation emphasizes testing upgrades, addressing issues proactively, and proceeding cautiously.
Slides from a presentation I did demonstrating the new features of SharePoint 2013 as well as a simple App I created which talks to a service on Windows Azure.
Accompanying article is at: http://www.shailensukul.com/2012/10/sharepoint-2013-swordfish-app.html
SharePoint Saturday Belgium 2014 - Best Practices for Configuring the ShareP... (BIWUG)
This document provides guidance on best practices for configuring the SharePoint 2013 BI stack, including:
1) Planning server topology, naming conventions, and service accounts before installation.
2) Configuring core services like Excel Services, PowerPivot, SSRS, and PerformancePoint Services.
3) Addressing common issues and testing configurations.
4) Discussing next steps like upgrading SQL Server and enabling Kerberos authentication.
The document provides an overview of upgrading from SharePoint 2010 to SharePoint 2013. It discusses the supported upgrade paths and that an in-place upgrade is not supported. It covers pre-upgrade tasks like assessing service applications to upgrade, testing the upgrade process, and performing a claims migration. The document then details the process for upgrading content databases, service applications like managed metadata and user profiles, and post-upgrade health checks.
SPCA2013 - Upgrade to SharePoint 2013 - A Cautioned Approach (NCCOMMS)
The document provides guidance on upgrading from SharePoint 2010 to SharePoint 2013. It notes that an in-place upgrade is not supported and a database attach is the only supported upgrade method. It discusses assessing which service application databases should be upgraded versus recreated. The document outlines the major steps for the upgrade process including preparing for the upgrade, claims migration, content database upgrade, and upgrading specific service applications like the managed metadata service and user profile service.
NZSPC 2013 - Upgrading to SharePoint 2013 (Michael Noel)
Michael Noel is the author of 19 technical books on enterprise technologies like SharePoint and Exchange Server that have sold over 250,000 copies. He is a partner at Convergent Computing, a San Francisco-based infrastructure and security consulting firm specializing in SharePoint, Active Directory, Exchange, and security. The document provides an overview of Michael Noel's background and expertise in documenting and consulting on Microsoft technologies.
Module 1: Core SharePoint Concepts
Topics include: an introduction to SharePoint, the different SharePoint versions you should consider, and why SQL Server and Windows are so important to SharePoint. Other topics:
• SharePoint Architecture
• SharePoint Licensing
• SharePoint Versions
• SharePoint Office 365 vs. The Cloud vs. On Site
• Intranet vs Internet sites in SharePoint.
• The role of Windows Server, SQL Server, and email servers etc.
• Directory hive in SharePoint.
• Introduction to SharePoint Central administration and Configuration wizard.
• Introduction to Tools used to Customize SharePoint.
This document provides an overview of SharePoint fundamentals including an introduction to SharePoint 2010, what it can be used for, hardware and software requirements, and the server and site architecture. It also demonstrates how to create a SharePoint web application and site collection, and how to work with lists, libraries, and pages. Key points covered include the capabilities of SharePoint Foundation 2010 vs Server 2010, prerequisites and steps for installing SharePoint, and how site columns and content types allow for reusable schemas.
SharePoint 2010 Installation Topologies - Day 2 (Narayana Reddy)
The document provides an overview of SharePoint 2010 site and database architecture, service applications, different SharePoint topologies, and prerequisites and steps for installing SharePoint. It discusses how SharePoint sites are organized hierarchically and have separate content, configuration, and other databases. It also explains how service applications in SharePoint 2010 are more independent than service silos in previous versions, and can be deployed separately. The document outlines hardware and software requirements and gives a high-level overview of installing SharePoint in single-server, small, medium, and large farm topologies.
This document provides guidance on deploying Microsoft Office SharePoint Server 2007 in a server farm environment. It discusses the following key points:
- Server farm deployments involve multiple dedicated servers and provide better performance and scalability than a single-server deployment.
- The deployment process involves three phases - deploying and configuring server infrastructure, creating and configuring Shared Services Providers (SSPs), and deploying and configuring SharePoint sites.
- Server farm topologies can range from a small configuration with two servers, to a large configuration with clustered database servers and multiple application and frontend servers.
- Proper planning is important before deployment, including acquiring necessary credentials and accounts, installing prerequisites, and completing initial configuration.
This document provides an overview of upgrading from SharePoint 2010 to SharePoint 2013. It discusses the supported upgrade paths and best practices for the upgrade process. Key aspects covered include pre-upgrade tasks like claims migration, content database upgrade steps using PowerShell commands, separate database and site collection upgrades, service application database upgrades for services like managed metadata and user profiles, and post-upgrade health checks. The presentation provides a detailed yet concise guide to planning and executing a successful SharePoint 2013 upgrade.
The document discusses integrating SQL Server Reporting Services (SSRS) with SharePoint. It provides an overview of SSRS and the integration with SharePoint 2007 and 2010. The key benefits of integration include centralized reporting, single point of administration, and ability to leverage SharePoint features. The document also demonstrates the installation and configuration process for integrating SSRS with SharePoint.
This document provides instructions for deploying Microsoft Office SharePoint Server 2007 in a server farm environment. It discusses:
- Deploying SharePoint in a multi-server farm configuration for performance, scalability and hosting large numbers of sites.
- Recommended server roles for small, medium, and large farms with database, application and front-end web servers.
- Requirements for accounts, software, hardware, and databases before deployment.
- A three phase process for deployment: configuring server infrastructure, creating shared services, and provisioning sites.
This document discusses SharePoint 2010. It begins with an agenda that includes what's new in SharePoint 2010, the SharePoint 2010 development primer, new developer tools, and integration with PowerShell. It then covers what's new in SharePoint 2010 including the new site structure, ribbon interface, and development tools. It demonstrates creating a new team site and using tools like SharePoint Designer. It discusses developing for SharePoint 2010 using tools like Visual Studio and PowerShell for administration.
SharePoint Saturday Kansas City - SharePoint 2013's Dirty Little Secrets (J.D. Wade)
The document discusses several "dirty little secrets" about configuring and setting up SharePoint 2013. It notes that SharePoint 2013 has optional software that must be installed for certain features to work properly. It also discusses how to configure SharePoint 2013 to search Exchange and Lync messages and support Access 2013 databases, but that these require non-trivial configurations. The document outlines several other requirements for properly configuring services, workflows, and other elements of a SharePoint 2013 implementation.
The document provides an overview of upgrading from SharePoint 2010 to SharePoint 2013. It discusses supported upgrade paths, requirements, claims migration, upgrading service applications and content databases. It also covers new features in SharePoint 2013 like deferred site collection upgrades, health checks, evaluation sites, upgrade logging and throttling. The presentation includes demos of upgrading the managed metadata and user profile services as well as content databases.
The document discusses PowerPivot for SharePoint. It provides details on how PowerPivot works, including how the PowerPivot workbook is rendered. It describes components like PowerPivot Services and Analysis Services. It also covers installing and configuring PowerPivot for SharePoint farms.
Building Custom Service Applications (Chris Givens)
Chris Givens presented on custom service applications in SharePoint 2010. He discussed that service applications break services out into separate entities from the 2010 upgrade, which will convert shared service providers to service instances. He listed the various service applications available out of the box in SharePoint 2010 including Access Services, Business Data Catalog, Excel Services, and others. He also covered how to create custom service applications and extend SharePoint's service-oriented architecture.
This document discusses building mobile apps with Xamarin and Visual Studio App Center. It describes how Xamarin enables code sharing across platforms using familiar languages and libraries while still allowing access to native device functionality. It compares classic Xamarin vs Xamarin Forms approaches and outlines the features of Visual Studio App Center, which provides a unified experience for building, testing, distributing and monitoring mobile apps in one place. It includes a demo of setting up a new application in App Center.
Jethro Seghers presented on Azure Active Directory. Azure AD is a comprehensive identity and access management cloud solution that combines directory services, identity governance, application access management, and a developer platform. It allows for single sign-on, self-service access, and synchronization of identities from on-premises directories to the cloud. Azure AD supports cloud identities stored directly in Azure, federated identities using AD FS, and pass-through authentication to keep identities on-premises while authenticating to Azure.
Microsoft has had the pedal to the metal introducing new features & services in Office 365, making the decision of when to use what quite difficult for organizations. Come explore different business use cases and how they align with core services such as Teams, Groups, and Yammer. We will talk about concepts such as audience, tone, and impact to help make informed decisions when recommending Office 365 solutions to solve business problems. Finally, we will talk about ways to help enable your business customers to adopt these services and ultimately realize the benefit of your organization's Office 365 investment.
SharePoint developers regularly face the decision, where do I put my application’s data? Sometimes this is an easy choice, using SharePoint Lists, or a SQL Server Database, but often a better solution exists. Or at least knowing that alternatives exist is beneficial, and further knowing when to use them. There are actually many storage options that both ASP.NET and SharePoint (along with modern browsers, HTML5, JavaScript) offer. This session will discuss many of these choices with best practices in mind along with live demonstrations. Examples include SharePoint Lists, Secure Store, property bags, persisted objects, Linq (to SQL, Entity, and SharePoint), web part properties, serialization options (to/from JSON and XML), session state, viewstate, httpruntime, application state, and thread bag. Also, client side storage examples will be introduced using modern HTML5 and JavaScript techniques. Further, free 3rd party products will be introduced that can be employed. Applies to all modern versions of SharePoint including 2013.
This document provides an overview and schedule for the SharePoint Saturday event being held at Rutgers University on September 20, 2014. It outlines the venue details, logistics, sessions, sponsors and prizes for attendees. The day includes five sessions ranging from 9:30am to 4:45pm, with lunch and refreshments provided. Prizes will be raffled at the end of the event for those who get their bingo cards stamped by sponsors.
Continuous Integration is a wonderful and popular practice in the software development universe. Yet, for whatever reason, it seems much less commonly utilized in the SharePoint community. SharePoint (naturally) throws a few wrinkles into the process, but no substantial roadblocks, and the benefits of CI can be realized just as well on SharePoint projects as anywhere else. In this session, you'll learn why you should implement a CI process and then see how to do it using TFS and Visual Studio.
So you’ve inherited a SharePoint environment and need it secure, ASAP. The talk explains how to do this in a methodical way, to address all the levels of SharePoint security. This is ideal for the SharePoint administrator who needs to address the server security realm and the security officer who needs to understand SharePoint security.
Just as taking responsibility for a relationship after commitment is important, monitoring applications after they go live is important too!
Microsoft's answer to this curious case is a cloud-based service named Application Insights, provided as part of Visual Studio Online.
In this session, we will figure out how to analyze whether our applications are living up to expectations from an availability and performance point of view, how we can steer our applications toward a long life, and much more fun stuff!
Knowing the vast majority of the content accessed via SharePoint is stored in SQL Server, and also knowing an incorrect configuration of SQL Server can have a detrimental impact on the performance of SharePoint it is important to understand the integration of these two products. Regardless of whether you have a dedicated DBA, or the SharePoint administrator is also the DBA, there are critical SQL Server configurations that can be made that will improve the performance of SharePoint. Often DBA’s are familiar with how to manage SQL Server, but may not be familiar with some nuances that SQL Server has when integrated with SharePoint. In this session we will demonstrate how some default SQL Server settings negatively impact SharePoint and what changes can be made to improve the performance of SharePoint. These changes include database file settings and SQL Server instance settings. We'll also examine how to properly install SQL Server and SharePoint so they work together as efficiently as possible. This discussion will introduce the Best Practices framework that will allow your SharePoint administrator and/or your DBA to configure SharePoint and SQL Server to provide optimal performance for your SharePoint implementation
The Microsoft Office Web Applications that were once configured and managed as a service application in SharePoint 2010 are configured and managed completely different in SharePoint 2013. The Office Web Applications (OWA) are now created in an Office Web Apps farm, which allows you to create a universal Office Web App environment that can host multiple SharePoint farms, as well as communicate with Lync and Exchange servers. The OWA farm allows users to create, edit, and share content using browser-based versions of Word, Excel, PowerPoint, and OneNote. Furthermore, OWA can be configured to enhance the users search experience by providing a document preview or thumbnail that is viewable from within the search result set. This session will discuss how and why you will want to implement the new Office Web Apps and the many benefits of doing so.
Apps for Office introduces a new programming model that is so flexible, you may not believe it unless you see it with your own eyes. You might say it is dangerously simple to enhance the functionality of Office. Apps for Office allow you to enhance the user experience for Access, Excel, Outlook, PowerPoint, Project and Word, most likely using your existing skills.
Starting with a brief discussion of the roadmap for the various types of Office apps, this talk will focus primarily on Mail apps and how you can use them to provide very valuable enhancements to the message and appointment (reading and composing) experiences.
You will learn what it takes to develop a Mail app (a real app, currently under development, will be shown), what the infrastructure looks like to deploy a Mail app, what the licensing process looks like, and how easy they are to monetize.
After the discussion, you will likely be beaming with ideas and be rushing home to begin building your very own App for Office.
Exchange Server 2013, SharePoint Server 2013, and Lync Server 2013 provide integrated functionality through features like site mailboxes, eDiscovery, high resolution photos, and task synchronization. Server-to-server authentication using OAuth 2.0 allows servers to request resources on behalf of users. Site mailboxes are provisioned and managed through SharePoint and provide email access to SharePoint sites. eDiscovery searches can include content from Exchange, SharePoint, and Lync through the search service application and server-to-server trust. High resolution photos are synced across applications and services using EWS. Tasks can be synced between Exchange and SharePoint through the work management service application.
This document provides an overview and demonstration of implementing term store navigation in SharePoint 2013. The presentation agenda includes an introduction to term store navigation, why it should be used, and a live demo. Term store navigation allows creating navigation that is not dependent on the site structure, divorcing the navigation from hierarchy. It instead uses the term store to drive navigation in a way that is more natural for how people think. The presentation concludes with contact information and details on local SharePoint user group meetings.
The document discusses various business intelligence tools in Microsoft including scorecards, KPIs, dashboards, and reports. It provides descriptions and examples of each tool as well as how they can be used to measure performance, visualize data, and make informed business decisions. Links are also included for additional resources on Microsoft BI architectures, capabilities of Excel Services, SharePoint, and SQL Server for building BI solutions.
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
HCL Notes and Domino License Cost Reduction in the World of DLAU
Integrating SSRS with SharePoint
1. July 26, 2014
13 Tips for Integrating SQL Server Reporting Services with SharePoint
The Baker's Dozen Business Intelligence: 13 SQL Server / Business Intelligence Productivity Tips
Kevin S. Goff, Microsoft SQL Server MVP
2. Kevin S. Goff – Brief BIO
• Developer/architect since 1987 / Microsoft SQL Server MVP
• Columnist for CoDe Magazine since 2004: "The Baker's Dozen" productivity series, 13 tips on a SQL/BI topic
• Wrote a book, collaborated on a 2nd book
• Frequent speaker for SQL Server community events
• Email: kgoff@kevinsgoff.net
• My site/blog: www.KevinSGoff.Net (includes SQL/BI webcasts)
• Releasing some SQL/BI video courseware later in 2014
28/4/2015 13 topics for SSRS/SP Integration
3. Reporting Services with SharePoint
• More organizations are moving to SharePoint as a corporate portal
• While the majority of SSRS deployments are still to native (non-SharePoint) servers, that's starting to change
• SharePoint 2010/2013 provides the following:
– a means for users to run SSRS reports inside a SharePoint portal
– a means to have SSRS reports automatically delivered to SharePoint document areas
– a means to create a web dashboard that contains not only SSRS reports but other corporate information
• Goals:
– Deploy an SSRS report to a SharePoint server
– Run the report in the SharePoint area
– Schedule reports for automatic delivery to a SharePoint document library
– Create a dashboard page that contains (among other things) SSRS reports
• Some challenges:
– Integration between SSRS and SharePoint differs depending on the versions involved
– Before SQL/SSRS 2012, SSRS performance in SharePoint could be sluggish
– Linked Reports (a feature of SSRS native deployment mode) are not available in SharePoint
5. The Agenda
1. Installing/Configuring SQL Server 2008R2/SSRS
2. Installing SharePoint 2010 – SSRS 2008R2 Integration
3. Configuring SharePoint for SSRS 2008R2 Use
4. SharePoint 2010 SP1/2013 with SSRS 2012 (alternative to steps 2-4)
5. Setting up a SharePoint Site collection for SSRS reports
6. Deploying an SSRS Report to a SharePoint site
7. Viewing the Report Document Library after Deploying
8. Viewing the report in SharePoint using the Report Viewer
9. Automatic delivery of SSRS reports to SharePoint pages using SSRS data-driven subscriptions
10. Integrating SSRS with PerformancePoint Services
11. Creating reports against SharePoint 2010 Lists
12. Data Alerts (SSRS 2012 Only)
13. Power View (new Data Visualization Tool, SSRS 2012 Only)
At the end, some notes on upcoming SSRS integration with Power BI
6. 1) Installing/Configuring SSRS 2008
Overall version matrix. Key point: with SQL 2012 and beyond, SSRS can run as a SharePoint Service Application; prior to SQL 2012, you must install the integration add-in.
• SQL Server 2008/2008R2:
– SharePoint 2007: must install the separate add-in, Reporting Services Integration with SharePoint
– SharePoint 2010: must install the separate add-in, Reporting Services Integration with SharePoint
– SharePoint 2013: must install the separate add-in, Reporting Services Integration with SharePoint
• SQL Server 2012 (currently the same in SQL 2014):
– SharePoint 2010: requires Service Pack 1 of SQL 2012; can be installed as a Service Application; slightly better performance than SQL 2008 in SharePoint
– SharePoint 2013: can be installed as a Service Application; slightly better performance than SQL 2008 in SharePoint, though SharePoint 2013 requires more memory
7. 1) Installing/Configuring SSRS 2008
• In a nutshell, we need to tell SSRS 2008R2 that it will use SharePoint Integrated Mode:
– Report Server (provisioning) database
– Report Server Web Service URL
• Then we need to tell SharePoint about the instance of SQL Server and the SSRS Web Service URL
8. 1) Installing/Configuring SSRS 2008
The slide diagrams the two halves of the configuration:
• 1) SQL Server Reporting Services installation (SSRS Configuration Manager): must set the Report Server configuration (provisioning) database and define it as SharePoint integrated; must also define the SSRS Web Service URL
• 2) SharePoint installation: needs to know about the instance of SQL Server and the SSRS Web Service URL; the SSRS Integration add-on for SharePoint ensures that all SSRS activity in SharePoint "funnels through" to the SSRS Web Service URL
9. 1) Installing/Configuring SSRS 2008
• Make sure to install SSRS as part of the database install
• Make sure to install SSRS for Integrated Mode with SharePoint
• This creates a database on the database server called ReportServer (or whatever name the DB winds up with)
• The SSRS Configuration Manager contains a Database tab that allows us to view/change the Report Server database
10. 1) Installing/Configuring SSRS 2008
• In the SSRS Configuration Manager, also check the Web Service URL
• The SSRS Web Service URL (used by SharePoint 2010) contains the web server, the default ReportServer application name (ReportServer), and the database instance (my SQL database instance is SQL2008R2)
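Not part of the original deck: a small Python sketch of how those pieces combine into the Web Service URL that SharePoint must be pointed at. The host and instance names are hypothetical; for a named SQL Server instance, SSRS appends the instance name to the ReportServer application name.

```python
# Illustrative sketch only -- host and instance names are hypothetical.
def report_server_url(host, instance=None, app_name="ReportServer"):
    """Assemble the SSRS Web Service URL that SharePoint points at.

    A named SQL Server instance gets its name appended to the
    ReportServer application name; a default instance does not.
    """
    app = "{0}_{1}".format(app_name, instance) if instance else app_name
    return "http://{0}/{1}".format(host, app)

print(report_server_url("bi-server"))               # default instance
print(report_server_url("bi-server", "SQL2008R2"))  # named instance
```

This is the URL you later paste into SharePoint Central Administration's Reporting Services Integration page.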
11. 2) Installing SharePoint 2010 & SSRS Integration
• Part of the SharePoint 2010 installation process is a set of installer prerequisites
• Reporting Services Integration with SharePoint is part of the installer prerequisites – it installs automatically if using Windows Server (or download it manually from the web)
– http://technet.microsoft.com/en-us/magazine/ff686706.aspx
• If installing SharePoint 2010 on Windows 7 (for development), you must install it manually:
– http://go.microsoft.com/fwlink/?linkid=192588
– Nice for testing, but must be running 64-bit
– http://www.codeproject.com/KB/sharepoint/Install_SP2010_on_Win_7.aspx
• The Windows 7/8 option is gone in SharePoint 2013
12. 3) Configuring SharePoint 2010 for SSRS 2008 Use
• In SharePoint Central Administration, go to General Application Settings and then into Reporting Services; this section was created by the prerequisite install
• (The Reporting Services section only appears if the Reporting Services prerequisite is installed)
13. 3) Configuring SharePoint 2010 for SSRS 2008 Use
• Go to Reporting Services Integration
– Specify the Web Service URL (from the SSRS configuration), plus credentials from an Administrator group
14. 3) Configuring SharePoint 2010 for SSRS 2008 Use
• Go to "Add a Report Server to the Integration"
– Specify your server name, database instance, and Admin or other service account for the server (must be a Domain Account)
– SharePoint 2010 and SSRS 2008 are now talking to each other!
15. 3) Configuring SharePoint 2010 for SSRS Use
• In SharePoint 2010, if we want to generate report output to HTML pages on a schedule and allow users to view them in the browser, we must set the Browser File Handling option in Web Application General Settings (Strict is the default; it must be changed to Permissive)
16. 4) Using SSRS 2012 as a SharePoint Service App
• Can install SSRS 2012 as a SharePoint Service Application
• Requires SQL Server 2012 (Service Pack 1 recommended)
• Also requires SharePoint 2010 SP1 or SharePoint 2013
• Benefit: slightly faster SSRS performance and fewer integration points to worry about
• Good link that covers the steps: http://msdn.microsoft.com/en-us/library/jj219068.aspx
• Run the following PowerShell commands:
– Install-SPRSService
– Install-SPRSServiceProxy
– Get-SPServiceInstance -All | Where-Object {$_.TypeName -like "SQL Server Reporting*"} | Start-SPServiceInstance
• Then go into Manage Service Applications and create a new SSRS Service Application
17. 5) Setting up a SharePoint Site collection for SSRS
• Create a Site Collection (TestSSRSSite)
– Go to Central Administration
– Go to Create Site Collection and call it TestSSRSSite
• Go to the site collection
• Create a new document library (as a report library for deployed reports)
• Create a second document library (for Generated Reports) – we will set up a report schedule to deliver output to this folder
18. 6) Deploying SSRS Reports to a SharePoint site
• In the SSRS project properties, set the deployment URLs
• At minimum, you need to provide:
– TargetDataSourceFolder
– TargetReportFolder
– TargetServerURL
• TargetDataSetFolder and TargetReportPartFolder are optional – only needed if you're using Shared Datasets and Report Parts
• Sadly, this dialog doesn't expand, which makes it a bit difficult for long URLs
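Not from the deck itself: a hedged Python sketch of those deployment properties (the URLs are hypothetical). In SharePoint integrated mode the target folders are full URLs inside the target site, and a mismatch is a common cause of deployment errors; the check below simulates that sanity test.

```python
# Illustrative only: the property names come from the SSRS project
# properties dialog; the URLs below are hypothetical examples.
deployment = {
    "TargetServerURL": "http://sp-server/sites/TestSSRSSite",
    "TargetReportFolder": "http://sp-server/sites/TestSSRSSite/ReportDocLib",
    "TargetDataSourceFolder": "http://sp-server/sites/TestSSRSSite/DataSources",
}

def misplaced_folders(props):
    """Flag target folders that do not live under the target server URL --
    in SharePoint integrated mode each folder must be a URL inside the site."""
    site = props["TargetServerURL"].rstrip("/")
    return [k for k, v in props.items()
            if k != "TargetServerURL" and not v.startswith(site + "/")]

print(misplaced_folders(deployment))  # [] -> all folders live under the site
```

A quick check like this mirrors what the deployment itself will enforce, just without the cryptic error message.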
19. 7) Viewing List of Deployed Reports
• Deployed reports appear in SharePoint, in the report document library
• The Data Sources document library contains the data sources
20. 8) Viewing Deployed reports
• Viewing the report… we can even click on a state to launch a new report
• The toolbar is similar to the toolbar in native SSRS mode, but with an additional option for Alerts
21. 8) Viewing Deployed reports
• Shows vendors by state based on prior report
22. 8) Viewing Deployed reports
• Demonstrates sparklines and performance gauges
23. 9) Automatic delivery of reports
• Can run report schedules and send output to user document libraries
• Uses SQL Server Agent – make sure the Agent is running
• Create a report shared schedule
– Site Actions / Site Settings / Reporting Services – Manage Shared Schedules
– Add a new schedule (this actually writes a job entry in SQL Server Agent)
• Modify the report data source to store credentials securely on the server (for unattended execution)
• Create a new subscription for the report
– Go to Report / Manage Subscriptions / Add Subscription
– Deliver to a SharePoint document library (Generated Reports)
– Set the output format
– Associate it with the report schedule
– Assign any parameters (can't use Linked Reports)
– You may want to schedule an execution snapshot as well
24. 9) Delivery through Data Driven subscriptions
• Instead of creating subscriptions manually, we can populate a relational control table with the entries we'd otherwise provide manually
• The SharePoint interface will prompt us for the necessary fields
• Once again, the source of data (as well as the data source for the relational control table that contains the subscription information) must have credentials stored securely on the server (for unattended execution)
• This can be a time saver if you have a large number of recipients
• It's also dynamic – it will pick up changes when we insert new rows into the relational control table
25. 9) Data Driven subscriptions
T-SQL code to create tblSubscriptionFileShare, to store data-driven subscription information
• The key value that the parameter expects gets difficult when dealing with OLAP parameters
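The T-SQL itself appeared as a screenshot and isn't reproduced here. As a stand-in, here is a Python/SQLite simulation of the control-table idea (the column names are hypothetical): one row per delivery, so the data-driven subscription picks up new recipients as soon as rows are inserted.

```python
import sqlite3

# Simulation only -- the real table lives in SQL Server and is queried
# by the data-driven subscription at execution time.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE tblSubscriptionFileShare (
        Recipient     TEXT,   -- target document library / file share
        RenderFormat  TEXT,   -- PDF, EXCEL, ...
        StateParam    TEXT    -- value for the report's State parameter
    )""")
con.executemany(
    "INSERT INTO tblSubscriptionFileShare VALUES (?, ?, ?)",
    [("http://sp-server/sites/TestSSRSSite/GeneratedReports", "PDF", "OR"),
     ("http://sp-server/sites/TestSSRSSite/GeneratedReports", "EXCEL", "WA")])

# The subscription effectively runs one delivery per row:
rows = con.execute(
    "SELECT Recipient, RenderFormat, StateParam "
    "FROM tblSubscriptionFileShare").fetchall()
for recipient, fmt, state in rows:
    print("deliver {0} output for State={1} to {2}".format(fmt, state, recipient))
```

Inserting a third row would add a third delivery on the next scheduled run, with no change to the subscription itself.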
26. 10) SSRS reports inside PerformancePoint Server
• If you're using PPS to create SharePoint dashboards against analytic databases (using SSAS OLAP or SSAS Tabular), you can devote specific dashboard pages to SSRS reports
• Can seamlessly integrate deployed SSRS reports and take advantage of hierarchical PPS dropdown filters
• We can place them in specific PPS page zones – arguably works more cleanly than web parts
27. 10) SSRS reports inside PerformancePoint Server
• By default, the dropdown list doesn't allow expand or collapse
• We could use web parts and SSAS filters, but they have flexibility issues and require modifications to the source report
• Instead, we can bring the report into a PPS dashboard page (which can be deployed back to SharePoint) and use much better dropdowns
28. 10) SSRS reports inside PerformancePoint Server
• End result: we can bring in an SSRS report to a PPS Dashboard page
• We can link it to the year in the KPI scorecard and to a hierarchical dropdown filter
29. 10) SSRS reports inside PerformancePoint Server
• Must create a PPS report as an SSRS report type
• Must specify the report server URL and the location of the report
• (No help with discovery – we have to provide the URLs ourselves)
• Might need to include _vti_bin as part of the Report Server URL (sometimes requires some trial and error)
• SharePoint 2013 has the ability to browse for the report
30. 10) SSRS reports inside PerformancePoint Server
• Must create a PPS filter against an OLAP source
• Uses the MDX Descendants function to get everything from the top level in the geography down to the City level, and everything in between
31. 11) SSRS reports against SharePoint 2010 Lists
• SSRS 2008R2 now offers a direct data source type for SharePoint 2010 lists
• No need to specify the ASMX web service; you only need to specify the site collection
• Much easier to specify the specific list
• (Screenshot callouts: "Core Site Collection", "Select custom list", "Make sure to set…")
32. 12) Data Alerts in SharePoint 2010 with SQL 2012
• Data Alerts – can set up rules for report executions to notify users of data changes
• Alerts can go to email addresses
• Only available when using SSRS 2012 (or higher) and SharePoint 2010 (or higher); not available for "native mode" SSRS environments without SharePoint
• Relies on SQL Server Agent – must have the Agent running
• SMTP4Dev – a simple email server from CodePlex
– Won't send email anywhere – great for development/testing
– Configured for the local server
• Must specify the mail server in:
– http://win-f44mi1754cm:17225/_admin/globalemailconfig.aspx
• Also specify mail settings in the SSRS Service Application or configuration settings
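Not part of the deck: a toy Python model of what a data alert rule does. The real rules are defined in the SharePoint alert designer and evaluated via SQL Server Agent; the column name and threshold below are made up for illustration.

```python
# Toy simulation only -- real SSRS data alerts are defined in SharePoint
# and evaluated by SQL Server Agent against report data feeds.
def check_alert(rows, column, threshold):
    """Return the report rows that should trigger a notification."""
    return [r for r in rows if r[column] > threshold]

report_rows = [
    {"Vendor": "A", "OpenOrders": 12},
    {"Vendor": "B", "OpenOrders": 9},
]

triggered = check_alert(report_rows, "OpenOrders", 10)
for row in triggered:
    print("alert: vendor {0} has {1} open orders".format(
        row["Vendor"], row["OpenOrders"]))
```

In the real feature, each triggered rule produces an email to the configured addresses on the schedule you define.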
33. 13) Power View – new SSRS 2012 tool with SharePoint
• New data visualization tool in SharePoint for SQL 2012
• Works against SSAS 2012 Tabular models and deployed PowerPivot models
– Support for traditional SSAS multidimensional OLAP databases was added in Cumulative Update 4 for SQL Server 2012 Service Pack 1
• End-user reporting tool, a visual subset of SSRS
• Nice storyboarding capability
34. 13) Power View – new SSRS 2012 tool with SharePoint
• Power View visualization against the Power Pivot data model
• The user can filter on Country – State/Province
• Scatter chart plotting city observations of sales revenue and number of orders
• Can use year as a "play axis" to show that while Beaverton is the top city in Oregon across all years, it wasn't the top city in 2007
35. 13) Power View – new SSRS 2012 tool with SharePoint
• We can even select a single city and plot the progression of its annual sales over time
• While this has nice interactive features, advanced users might want to show a linear regression line and also the correlation coefficient (the impact of order count on sales)
• Here is where tools like SSRS or even Excel pivot charts are a better option – Power View does not have these features
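Not in the deck: to make the point concrete, a short Python sketch with made-up per-city figures, computing the regression line and correlation coefficient that Power View can't show.

```python
# Illustrative sketch only -- the per-city figures below are made up.
# Computes the regression line and correlation coefficient (impact of
# order count on sales) that Power View cannot display.
orders = [120, 150, 180, 210, 260]           # orders per city
sales = [24000, 31000, 35500, 42000, 51500]  # sales revenue per city

n = len(orders)
mean_x = sum(orders) / n
mean_y = sum(sales) / n
sxx = sum((x - mean_x) ** 2 for x in orders)
syy = sum((y - mean_y) ** 2 for y in sales)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(orders, sales))

slope = sxy / sxx                   # extra sales per additional order
intercept = mean_y - slope * mean_x
r = sxy / (sxx * syy) ** 0.5        # correlation coefficient

print("sales ~ %.1f * orders + %.1f, r = %.3f" % (slope, intercept, r))
```

With real data, the same arithmetic can be expressed in SSRS report expressions or an Excel pivot chart trendline, which is the slide's point.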
36. 13) Power View – new SSRS 2012 tool with SharePoint
• Cross filtering – I can click on the pie slice for Australia, and the bar chart above shades the monthly sales just for Australia
37. Coming soon!!! SSRS with Power BI
• Power BI sites (in the cloud)
• The next release (later in 2014) will be able to "connect back" to on-premises data sources
• SSRS reports deployed to a Power BI site can use on-premises data sources through a Data Gateway
38. Coming soon!!! SSRS with Power BI
• The slide diagrams the flow: a local SSRS project is deployed to a Power BI site "in the cloud"; users access the reports in Power BI dashboards; a secured Data Gateway lets the cloud-hosted reports access the company's on-premises database
• Follow the blog of Chris Webb for details: http://cwebbbi.wordpress.com/