
Implementing LOD in Tableau, Power BI and QlikSense


Level of Detail (LOD) expressions enable a user to achieve deeper insights into data. Understanding LOD can be a bit tricky. In our previous blog, we focused on understanding the concept of ‘Include LOD’, its benefits and how to implement it in Tableau.

Let’s explore more on how to implement ‘Include LOD’ in Power BI and QlikSense using respective native functions by using the same scenario as explained in our previous blog.

To explain the functionality, we will utilize the below image. It shows the ‘Include LOD’ output for ‘Average Sales per customer’ for each state.

implementing-lod-tableau-power-bi-qliksense

Image 1: Implementing LOD in Tableau, Power BI and QlikSense

 

Finding Average Sales per Customer in every state using QlikSense

‘Aggr’ is a very powerful function in QlikSense. The function creates a temporary table with measures and dimensions. This functionality is similar to ‘Group By’ in SQL.

Syntax of the function is as follows:

Aggr(Aggregate Expression, Dimension 1, Dimension 2 … Dimension N)

  1. Create a table chart as shown below (In the below image average sales is computed for each ‘State’)
implementing-lod-tableau-power-bi-qliksense

Image 2: Implementing LOD in Tableau, Power BI and QlikSense

 

2. Add the following expression as a measure:

Avg(Aggr(Avg(Sales),State,[Customer ID]))

The Aggr() expression creates a temporary table in the backend with State, Customer ID and Avg(Sales). Therefore, Avg(Sales) is computed at the State and Customer ID level. The Avg(Sales) result from the temporary table is displayed in Image 3. ‘Average Sales per Customer ID’ for each state is now created.

implementing-lod-tableau-power-bi-qliksense

Image 3: Implementing LOD in Tableau, Power BI and QlikSense
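
Since Aggr() behaves like a GROUP BY, the same two-step logic can be sketched outside of QlikSense. The snippet below is only an illustration (a hedged pandas analogue, not part of the QlikSense solution), assuming a DataFrame with State, Customer ID and Sales columns:

import pandas as pd

# Illustrative sample data standing in for the sales dataset used in the blog.
df = pd.DataFrame({
    'State': ['Alabama', 'Alabama', 'Alabama', 'Arizona'],
    'Customer ID': ['C1', 'C1', 'C2', 'C3'],
    'Sales': [100.0, 200.0, 50.0, 80.0],
})

# Step 1: what Aggr(Avg(Sales), State, [Customer ID]) produces, i.e. the average
# sales for each State / Customer ID combination (the temporary table).
per_customer = df.groupby(['State', 'Customer ID'])['Sales'].mean()

# Step 2: the outer Avg() averages those per-customer values within each State,
# giving 'Average Sales per Customer' for every state.
avg_sales_per_customer = per_customer.groupby(level='State').mean()
print(avg_sales_per_customer)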

 

Now you can see that the ‘Average Sales per Customer ID’ values in Image 3 and the Tableau output in Image 2 are the same.

We have now implemented ‘Include LOD’ in QlikSense using the Aggr function.

 

Finding Average Sales per customer in every state using Power BI

  1. Build a Tabular chart as shown below
implementing-lod-tableau-power-bi-qliksense

Image 4: Implementing LOD in Tableau, Power BI and QlikSense

 

We can utilize the Quick Measures feature in Power BI to generate the DAX query that implements LOD and gives us Average Sales per Customer for each state.

2. Right-click on the Sales measure under the Fields menu and select Quick measures

3. Set the configurations as shown below in Image 5

implementing-lod-tableau-power-bi-qliksense

Image 5: Implementing LOD in Tableau, Power BI and QlikSense

 

As we are calculating the ‘Average Sales per Customer ID’, we select ‘per category’ option from the Calculation drop down. The aggregate we are using here is ‘average’ which is used as the base value. The category, in this case, is Customer ID. The Average Sales is now computed ‘per category’ (in our case Customer ID) in addition to the dimensions already present in the chart. Therefore, the ‘Average Sales per Customer ID’ for each state is calculated.

implementing-lod-tableau-power-bi-qliksense

Image 6: Implementing LOD in Tableau, Power BI and QlikSense

 

In the above image, we can see the DAX query generated for the configuration set under Quick measures. The created measure is listed under the Fields menu. Using the measure in our chart, we get the following output.

implementing-lod-tableau-power-bi-qliksense

Image 7: Implementing LOD in Tableau, Power BI and QlikSense

 

Comparing the values in Image 7 and Image 2 we can find that the values are the same. Therefore, we have implemented ‘Include LOD’ in Power BI to find ‘Average Sales per Customer’ in each state.

 

Learn more about our Microsoft Power BI offerings here. 



Storing Images (BLOB) in SAP HANA using Python


In this blog, we will demonstrate how to convert an image into a BLOB object using Python and store it in the HANA database.

BLOB is the acronym for Binary Large Object, which can store images and other files in binary format up to a maximum of 2 GB. The functionality demonstrated here can also be extended to other BLOB objects such as audio, video and PDF files.

 

Steps

  1. Install Python along with the required SAP libraries (Refer to GitHub repository for more details https://github.com/SAP/PyHDB)
  2. Establish a connection between Python and HANA DB
  3. Convert object using Python
  4. Insert the BLOB object into a HANA table
  5. Consume the HANA table/view in SAP Analytics Cloud

In the example below, the SAP Analytics Cloud application has been developed to analyze the details of a customer.

 

Customer details and logos/images have been taken from the HANA table where they are stored as BLOBs.

storing-images-blob-in-sap-hana-using-python-1

 

Implementation details

In order to consume BLOBs in SAP Analytics Cloud (SAC), they need to be stored with the ST_MEMORY_LOB option. Hence, the table needs to be defined in the following format:

CREATE COLUMN TABLE <SCHEMA_NAME>.<TABLENAME> (<COLUMNNAME1> <DATATYPE> PRIMARY KEY, <COLUMNNAME2> BLOB ST_MEMORY_LOB);

After creating the table, the connection from Python to HANA DB needs to be established using its corresponding host/port number and credentials. This connection can then be verified by triggering a dummy SQL query.

Now, the required image file can be converted to BLOB in Python by just reading it.

In this example, ‘Hana1.png’ is converted using a read statement. ‘binaryData’ contains the image in the encoded Binary Format.

storing-images-blob-in-sap-hana-using-python-1
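
The screenshot above shows this step; a minimal sketch of the same idea (assuming the image sits in the working directory) is:

# Reading the file in binary mode yields a bytes object ('binaryData' in this
# blog), which is what gets inserted into the BLOB column.
with open('Hana1.png', 'rb') as image_file:
    binaryData = image_file.read()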

 

Create a connection to HANA using ‘pyhdb.connect()’ along with the required connection details. Then we can define a cursor in HANA and leverage it for executing the necessary commands.

cursor.execute("INSERT INTO <SCHEMA_NAME>.<TABLE_NAME> VALUES ('Hana1.png', ?)", (binaryData,))

‘Hana1.png’ refers to the filename and acts as the primary key while ‘binaryData’ is the corresponding BLOB. Once the execution is successful, we can commit this cursor to insert a row in our HANA table.
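
Putting these steps together, here is a hedged end-to-end sketch. The host, port and credentials are placeholders, and the '?' bind-parameter style is an assumption; verify it against pyhdb's documented DB-API paramstyle before relying on it.

import pyhdb

# Connection details below are placeholders for your own HANA system.
connection = pyhdb.connect(
    host='hana-host.example.com', port=30015,
    user='HANA_USER', password='secret')
cursor = connection.cursor()

# 'Hana1.png' (the filename) is the primary key; binaryData is the bytes
# object produced by the read step shown earlier.
cursor.execute(
    "INSERT INTO <SCHEMA_NAME>.<TABLE_NAME> VALUES ('Hana1.png', ?)",
    (binaryData,))

connection.commit()   # commit the cursor so the row is persisted in HANA
connection.close()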

Now, we have successfully inserted and stored an image into the HANA table. This table can be joined with other tables/views and consumed in SAP Analytics Cloud using HANA live connectivity.

 

Read our other blogs on SAP HANA here.

Learn more about Visual BI Solutions’ SAP HANA services offerings here.


Conditional Formatting on Visual Charts using Parameters in Tableau


There is always a need for users to make quick decisions based on the visuals presented, and color indicators help users achieve this goal. This blog explains how users can use conditional formatting in Tableau to arrive at better decisions.

Conditional formatting in Tableau can be achieved with the use of ‘Parameters’.

A ‘Parameter’ acts as a placeholder for a value against which a condition can be evaluated.

Scenario

Using Tableau, identify which ‘Product Sub-Categories’ are not meeting the ‘Profit Margin’ that the company needs to achieve.

Solution

1. Pull ‘Profit’ measure as an aggregated sum and place the dimension ‘Sub-Category’ under rows

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 1

2. It results in a visual as shown below,

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 2: Chart Visual

 

Now we need to understand a way that could help us highlight the subcategories that are not within the Profit Margins.

To achieve this, we will need to follow the below steps:

1. Let’s start creating a ‘Parameter’ using the below value and selection.

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 3: Threshold

2. We will name the ‘Parameter’ as ‘Threshold’ to act as a base condition, on which the visuals will change color

3. Now, create a calculated field with a simple ‘IF case’ scenario to highlight the ‘Sub-Categories’ that are not performing well and place them with text indicators

4. Create a new ‘Calculated Key Figure’ and name it as ‘High-Profit Subcategories’ and place the below-shown condition,

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 4: Key Figure Condition

5. Now drag the ‘Calculated Key Figure’ into ‘Marks’ and place it under Text. It results in a visual as shown below,

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 5

 

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 6 – Key Figure

6. To get the threshold as a dynamic input field right click on the threshold and select the below option as shown in Image 7

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 7

7. It results in output as shown in Image 8

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 8: User Input

Finally, the user will be able to get a simple yet useful visualization analysis as shown in Image 9

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 9: Conditional Formatting in Chart Visual

Alternatively, we can also showcase the same functionality using a reference line:

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 10: Conditional Formatting

Right click on the X-Axis and choose the option Edit Reference Line. Place the below-shown options:

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 11: Threshold Reference

Use the same ‘Calculated Key Figure’ created before named, ‘High-Profit Subcategories’ and drag it to color under Marks.

The Final Result will look like Image 12

Conditional Formatting on Visual Charts using Parameters in Tableau

Image 12: Final Image

Hence we have achieved conditional formatting on Chart Visuals in Tableau.

Read more on Tableau blogs here.


SAP Data Hub – An Introduction and Installation of Dev Edition


Data Hub is a strong data management and orchestration tool for data integration, data processing, and data governance. Data orchestration is composed of reusable pipelines and configurable operations that process data pulled from a variety of sources, including CSV files, XML, web service APIs, hybrid cloud services, and SAP data stores like HANA, BW ABAP Data flow, etc. This blog gives a detailed introduction to SAP Data Hub and the installation of its Developer Edition.

Advanced operations can be achieved using analytics or machine learning libraries such as TensorFlow, or custom-coded tasks in Data Hub.

SAP Data Hub is available in two different editions:

  1. SAP Data Hub – Developer Edition
  2. SAP Data Hub – Trial edition

 

SAP Data Hub, Developer Edition

SAP Data Hub developer edition was first delivered at the end of 2017. Its latest version 2.4 is now available for download.

SAP Data Hub can be installed on any platform that supports Kubernetes.

This includes managed cloud services:

  1. AWS (EKS), GCP (GKE), Azure (AKS)
  2. Private cloud
  3. On-premise installations like SUSE CaaS Platform

The SAP Data Hub developer edition can be installed on your local computer with the help of a Docker container. SAP packages Data Hub together with the Hadoop Distributed File System (HDFS), Spark and Livy into a single Docker container image. This container image can be used to start components such as the SAP Vora Database, SAP Vora Tools, the SAP Data Hub Modeler, or HDFS and Spark.

 

Limitations of installing SAP Datahub Developer edition on your local computer are:

  1. Data governance and workflow features not being available
  2. Currently, we are facing an issue with using operators in SAP Data Hub developer edition
  3. Operators related to machine learning like TensorFlow and image processing operators OpenCV currently cannot be used in SAP Data Hub developer edition
sap-data-hub-introduction-installation-dev-edition

SAP Datahub Architecture

 

Pre-requisites and hardware requirements

Before getting started with the SAP Data Hub Developer Edition installation, please ensure that the following prerequisites and hardware requirements are met on your local computer.

 

Hardware requirements

  1. 64-Bit Processor with Intel/AMD instruction set “X86_64”
  2. At least 2 CPU Cores (better: 4 Cores) for the purpose of the Developer Edition.
  3. At least 8 GB of RAM for the purpose of the Developer Edition
  4. At least 10 GB disk space for running docker image
  5. Internet Connectivity (temporary)

 

Software requirements

  1. The operating system must support the installation of Docker (http://www.docker.com)
  2. Docker is available for Windows, MacOS, and Linux

 

Docker installation

Docker is a computer program that performs operating-system-level virtualization. Docker is used to run software packages called containers. Docker provides seamless integration with the Windows operating system.

Please download Docker for Windows from the below link:

https://hub.docker.com/editions/community/docker-ce-desktop-windows

Docker Desktop for Windows is designed to run both Windows and Linux containers. However, the Data Hub Developer Edition requires Docker to be switched to “Linux containers” mode.

Test whether Docker is working properly by running the standard test image:

docker run hello-world

 

Obtaining SAP Data Hub Developer Edition

Download the Developer Edition from the below link and unpack the archive onto your local disk:

https://www.sap.com/developer/trials-downloads/additional-downloads/sap-data-hub-developer-edition-15004.html

 

Building Container Image

Steps to build a container image:

  1. Open a terminal window and switch to the directory where you have unpacked the Developer Edition
  2. Issue the command for creation of the base image: docker build --tag sapdatahub/dev-edition-base:15.0-01 -f dev-edition-base.Dockerfile .
  3. Issue the command for creation of the final image: docker build --tag sapdatahub/dev-edition:2.3 .

 

Running SAP Data hub Developer Edition

Run the below command to get more information about the usage of the developer edition:

docker run -ti sapdatahub/dev-edition:2.3

 

Supported commands:

run – starts the SAP Data Hub processes in the container
run-hdfs – starts the processes related to HDFS and Spark/Livy in the container
prompt – starts a bash shell so that further processes can be started manually
network – performs a network check for accessing public internet sites

  1. The minimal set of parameters to spin up the Developer Edition as a container is: docker run sapdatahub/dev-edition:2.3 run --agree-to-sap-license
  2. To run a Docker container, first create a Docker network:
    docker network create dev-net
  3. Followed by (for Linux, Mac)

sap-data-hub-introduction-installation-dev-edition

 

or for Windows

sap-data-hub-introduction-installation-dev-edition

 

Launch the SAP Data Hub Modeler by opening this URL: http://localhost:8090

 

Start and Stop Data Hub

To start and stop Data Hub, use the below commands. The container name “Devedition” is not mandatory; you can choose a name based on your needs:

               docker start Devedition

               docker stop Devedition

 

Running HDFS

You can launch HDFS by running the below commands and access the Apache Hadoop user interface as shown in the image below:

sap-data-hub-introduction-installation-dev-edition

 

or for windows

sap-data-hub-introduction-installation-dev-edition

 

Launch HDFS by opening the below URL:

http://localhost:50070

 

Quick cockpit view of SAP Data Hub Developer Edition

Data Hub provides the below user interface for navigating and for creating pipelines and workflows.

sap-data-hub-introduction-installation-dev-edition

 

References:

Installation of SAP Data Hub Dev Edition:

https://developers.sap.com/tutorials/datahub-docker-v2-setup.html

 

Limitation of SAP Data Hub Dev Edition:

https://blogs.sap.com/2017/12/06/sap-data-hub-developer-edition/

https://blogs.sap.com/2017/12/06/faqs-for-sap-data-hub-developer-edition/

 

Read more blogs related to SAP here.


Performance Analyzer Feature in Power BI


It is vital to have a performance-optimized dashboard. The new release of Power BI is out with a feature that helps us understand how the performance of a dashboard can be optimized. This feature can easily be toggled on or off based on user need. We simply need to check the below option to enable the Performance Analyzer pane.

performance-analyzer-feature-power-bi

Image 1 – Option Trigger

 

We should obtain the below pane alongside our dashboard page.

performance-analyzer-feature-power-bi

Image 2: Performance Analyzer Pane

 

1. Click Start recording to trigger the Performance Analyzer for our analysis

2. Either select a specific component to be refreshed or click on Refresh visuals to get all details of all components in the dashboard

performance-analyzer-feature-power-bi

Image 3- Performance Analyzer Pane Detail

 

3. The performance analysis is done on three parameters:

  1. DAX Query – the length of time it takes for Analysis Services to run the query
  2. Visual Display – how long it takes for the visual to be drawn on the screen (including anything like retrieving web images or geocoding)
  3. Other – covering background processing like preparing queries and fetching result sets

To get a more detailed view of the component analysis, we have the Copy query option, which we can paste into a notepad, or we can export the report in JSON format using the Export option.

performance-analyzer-feature-power-bi

Image 4 – Performance Analyzer Pane Detail Full

 

4. We can review the DAX code to see if any changes can be done to optimize the performance of the dashboard

performance-analyzer-feature-power-bi

Image 5 – JSON Image

 

This feature is currently available only in Power BI Desktop, and we expect more detailed insights in future releases.

 

Know more about Microsoft Power BI services offerings from Visual BI solutions here.


Setting the Scope of Table Calculations in Tableau


Table Calculations in Tableau can be used to transform data in a visualization to perform comparative analysis, analyze trends over time, ranking, etc. These calculations are based only on the data that is currently in the visualization. Some examples of Table Calculations in Tableau are ‘Rank’, ‘Running Total’, ‘Percent Difference From’ & ‘Moving Calculation’. These options are available in the ‘Calculation Type’ of a Table Calculation.

It is also important to set the scope of these calculations. Setting the scope is required, for instance, in a scenario where the sales in a month need to be ranked with respect to sales in the quarter or year. This blog will focus on how to approach similar scenarios with the ‘Compute Using’ feature in Tableau. This is a feature that is used to set the level at which computations must take place, and it is used in various table calculations.

 

Let’s try to understand its use with examples.

Scenario: Rank the sales in each month with respect to its year

  1. Build the following chart

setting-the-scope-of-table-calculations-in-tableau

 

2. Click on the SUM(Sales) green pill -> Quick Table Calculation -> Rank. As seen in the image below, we get the ranking of each month’s sales in comparison with the other monthly sales values of the year.

setting-the-scope-of-table-calculations-in-tableau

 

Scenario: Rank the sales in each month with respect to all months in all four years

The rank of sales is computed along rows and then columns, that is, the rank of each month-year is calculated first across months of a year and then across months of other years.

setting-the-scope-of-table-calculations-in-tableau

 

This could be a little confusing. To understand how the computation takes place, click on Sum(Sales) -> Edit Table Calculation. The highlighted area shows how the computation is done. Here it is done at the whole table level.

setting-the-scope-of-table-calculations-in-tableau

 

If Table(across) is selected, the computation is done across, that is, across months of each year.

setting-the-scope-of-table-calculations-in-tableau
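
The ‘Compute Using’ setting only changes the partition over which the ranking runs. As a rough analogue (not Tableau itself), the two scopes can be sketched in pandas, assuming a table with Year, Month and Sales columns:

import pandas as pd

sales = pd.DataFrame({
    'Year':  [2018, 2018, 2018, 2019, 2019, 2019],
    'Month': ['Jan', 'Feb', 'Mar', 'Jan', 'Feb', 'Mar'],
    'Sales': [100, 300, 200, 150, 120, 400],
})

# Scope = within each year: rank a month only against the other months of its year.
sales['rank_within_year'] = (
    sales.groupby('Year')['Sales'].rank(ascending=False, method='min'))

# Scope = whole table: rank each month-year against all month-year values.
sales['rank_whole_table'] = sales['Sales'].rank(ascending=False, method='min')

print(sales)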

 

Now we have successfully used the ‘Compute Using’ feature in Tableau and transformed the data as desired.

 

Check out our other blogs on Tableau here.

To learn more about Visual BI’s Tableau Consulting & End User Training Programs, contact us here.


Historical Data Preservation using Power BI Dataflow


Power BI Dataflow is a Self-Service revolution providing extensive capabilities on data preparation for the Power BI Online Service. It allows large-scale, complex and reusable data preparation to be done directly on the Power BI Service and store it in ‘Azure Data Lake Storage Gen2’.

 

Power BI Dataflow

Power BI Dataflow is user-friendly and leverages the Power Query Editor in Power BI. With Power BI Datasets, we can import large amounts of data and schedule it for frequent refreshes in the service. However, loading large historical datasets in Power BI without ETL and Data Warehousing is always tedious. We should not expect agile performance when loading large datasets in Power BI. To overcome this, Microsoft provides us with an excellent solution called Power BI Dataflow. It can handle large volumes of data preparation by leveraging the ‘Azure Data Lake Storage Gen2’ which is designed for even the largest datasets.

Each table/query here is stored as entities inside the dataflow and each entity can be scheduled for incremental refresh, independently. Power BI can consume any number of dataflows as data sources for creating datasets, reports, and dashboards.

 

Let’s take the scenario of storing historical data without ETL and Data Warehousing. Using Power BI datasets, the only way is to dump the entire dataset truncating the older data inside the dataset. Then schedule the data load refresh in the service which will start loading the daily transactional data. However, the dataset will grow each day and performance will continue to worsen as the data grows.

Whereas in a Power BI Dataflow, we can handle this in a much smoother way. Before beginning, ensure the workspace is enabled for Dataflow.

Note: Incremental refresh of entities in Power BI dataflow is allowed only for Premium users.

historical-data-preservation-using-power-bi-dataflow

 

Incremental refresh for premium users

  1. Create a Dataflow
  2. Click on Workspace -> Create -> Dataflow

Create two entities, one for storing transactional data and another for storing historical data.

historical-data-preservation-using-power-bi-dataflow

 

Entity for transactional data

This entity always stores data for the current year. Once the entity is created, schedule it daily as needed to initiate the incremental refresh.

historical-data-preservation-using-power-bi-dataflow

M-Query for fetching only the data for the specified year

 

historical-data-preservation-using-power-bi-dataflow

Entity incremental refresh

 

Entity for historical data

Only stores the historical data for the previous year and older. This entity can be scheduled if needed or can be triggered manually once a year as needed.

historical-data-preservation-using-power-bi-dataflow

 

Incremental refresh for pro users

Power BI Dataflow adds this functionality for Premium Workspaces. For Pro users on standard Workspaces, the functionalities are limited.

For our scenario, Power BI Dataflow does not allow us to create an incremental refresh of entities for Pro accounts. So, we need to have a workaround for history preservation. Instead of creating two entities in the same dataflow, create two different dataflows with two different entities and schedule each dataflow to refresh as needed.

Create two dataflows with transactional and historical entity, respectively.

historical-data-preservation-using-power-bi-dataflow

 

Schedule the dataflows as we would do for any Power BI Datasets in the service. We can schedule the transactional dataflow every day so that it will start incremental loads. Historical data will be triggered manually.

Therefore, the Power BI Dataflow can replace traditional ETL and the Azure ELT processes, while also:

  • Reducing the refresh time
  • Creating a more reliable process
  • Reducing the consumption of data
  • Improving the performance of Power BI Reports and Dashboards

 

Learn more about our Microsoft Power BI offerings here. 


B4HANA 1.0 Vs B4HANA 2.0: ADSO Modelling Properties


Advanced DSO (ADSO) is the primary artifact for persisting data in SAP BW 7.4 onwards. It combines functions from the Infocube/DSO and provides further enhancements such as modeling on Infoobjects as well as on simple fields.

Until B4HANA 1.0, ADSO model types were determined by selecting the appropriate settings shown in the below image. In addition to that, SAP provided predefined templates which, when chosen, adjust the settings accordingly.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

In B4HANA 2.0, the modeling screen has undergone a change. Our blog discusses each ADSO type in detail and the corresponding setting that needs to be chosen in B4HANA 1.0 and B4HANA 2.0.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Data acquisition layer (including corporate memory) / Write optimized DSO

This is similar to the write optimized DSO which contains only the inbound/new table and no change log or active table. You can use this ADSO as the staging area in your data warehouse model.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Corporate memory – compression capabilities

Corporate memory enables you to store the entire history of the data for reconstruction purposes, without the need to extract it from the source system. ADSO also enables you to compress this data, thereby reducing the overall data footprint. However, you will not be able to trace the record back to its load after enabling this option.

This ADSO contains the inbound and active table. Upon activation, data in the inbound table is cleared and the active table gets compressed by aggregating records based on their semantic key, which basically overwrites the previous record with the value present in the current load. For key figures, the aggregation depends on the type (Overwrite/Summation) selected in the transformation.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Corporate memory – reporting capabilities

The only difference between this ADSO and the one above is that after activation, data remains in the inbound table. This will enable you to trace records back to their corresponding source data loads.

Data is extracted from the inbound table and reporting is done on the active table.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Data warehouse layer – delta calculation / Standard DSO

This setting will enable the ADSO to behave like a Standard DSO, which means it will have a Change log table for delta extraction, Inbound/New table and active table for reporting and full load.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Data warehouse layer – data mart

The data mart option enables the ADSO to behave just like an Infocube; it does not have any change log. The inbound table acts like an “F table” and the active table acts like the “E table” for compressed data. Upon activation of the request, the inbound table is cleared.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Planning on Infocube – like

This ADSO is modeled like the Data mart ADSO. It has an Inbound table and active table. All characteristics are marked as key fields in the active table which is a necessary requirement for planning.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Planning on Direct Update

This setting allows planning on a direct update ADSO. Data is loaded directly into the active table using DTP or an API. This DSO has an overwrite option unlike the Planning on Infocube model which avoids duplicate records.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Inventory

In order to use non-cumulative key figures, we must select “Inventory-Enabled” setting on the ADSO. As of now, this is supported for Standard ADSO’s and cube type ADSO’s.

b4hana-1-0-vs-b4hana-2-0-adso-modelling-properties

 

Know more about our SAP BW Services offerings here.



BW/4HANA Migration – Conversion of CMOD Exit to Enhancement Spots


Customer exits are possibilities provided by SAP to customize and enhance standard functionality to address business requirements. Until the release of SAP BW 7.3, customer exit functions for variables were programmed in transaction code CMOD (Customer Exit) under function module EXIT_SAPLRRS0_001 (Exit RSR00001).

The enhancement spot “RSROA_VARIABLES_EXIT” was introduced in SAP BW 7.4. This BAdI provided an additional option to enhance standard functionality along with EXIT_SAPLRRS0_001.

With the introduction of SAP BW/4HANA, the enhancement spot is the only option because EXIT_SAPLRRS0_001 is no longer supported. All existing CMOD exits must be converted to enhancement spots before migrating to BW/4HANA.

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

This blog focuses on how to convert the obsolete customer exits (CMOD) to enhancement spots. The recommended approach is to use enhancement spots and group customer exits based on the application area. This ensures easier maintainability.

Step 1: Create a function module in SE37 like the CMOD function module EXIT_SAPLRRS0_001

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

Make sure that you copy the import parameters, export parameters and changing parameters as in EXIT_SAPLRRS0_001. Your function module should look like the following:

1. Import Parameter

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

2. Export Parameters

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

3. Changing Parameters

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

STEP 2: In the source code tab, write INCLUDE <include name>. This program will contain the contents copied from EXIT_SAPLRRS0_001

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

Save your function module and double-click the include name. The pop-up box shown below will appear; click “Yes”.

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

STEP 3: Copy the code present in the include of FM EXIT_SAPLRRS0_001 into your new include and activate it along with your Function module

STEP 4: Now go to transaction SE18 and select enhancement spot RSROA_VARIABLES_EXIT

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

Right click on implementation and select “Create BAdI Implementation”

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

You will be shown a popup with two predefined enhancement implementations. Go ahead and create a new enhancement implementation for your CMOD.

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

STEP 5: Now create a BADI implementation

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

Select the Implementation class and double click on the method IF_RSROA_VARIABLES_EXIT_BADI~PROCESS to create it. Choose “Yes” in the popup that appears.

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

STEP 6: Click on “Pattern” and enter the function module we had created earlier in the CALL FUNCTION box.

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

Adjust the import and export parameter such that it looks like the screenshot below.

bw-4hana-migration-conversion-cmod-exit-enhancement-spots

 

Activate the method and relevant BADI and enhancement implementation.

After migration, CMOD exits present in your function module will be called through the BADI.

 

References:

https://launchpad.support.sap.com/#/notes/2458521

 

Know more about Visual BI Solutions SAP BW Services offerings here.


Customizing Legend Selections using VBX Script Box


The Legend of a Chart plays an important role in enhancing the Visualization appeal and Interactive ability of the Chart. Legends are no longer simply static indicators of the data series displayed in the chart but have grown in function to provide customizable selection options to a user viewing the chart.

The VBX suite of charts provides an option by default to de-select a data series by clicking on the legend. However, some users might encounter situations where de-selecting a series on clicking the legend is not desirable; they might want a different outcome, such as highlighting the data series of the legend option which was clicked and de-selecting or greying out the remaining data series. For such custom requirements, the VBX Script Box is the go-to component.

customizing-legend-selections-using-vbx-script-box-1

 

The following code snippet allows you to achieve the scenario as shown in the image above with the help of the VBX Script Box:

customizing-legend-selections-using-vbx-script-box

 

Explanation for code

1. All properties and data of the chart whose legend event is to be customized are stored as a jQuery object

2. The length variable stores the number of legend options available in the chart; we will be using it as an iteration variable

3. Each legend option is stored as an array element, so the event associated with each legend option needs to be re-written; hence the first for-loop, where the legend on-click event is modified using a function. (Note: legendItemClick events are stored in an array, so we could also push events to this array, but for simplification we re-write the first event which is defined by default. Hence, the function is assigned to legendItemClick[0])

4. The ‘isShow’ variable is Boolean and is assigned true if the current series index is equal to the series index of the previously clicked legend option, i.e. if the same legend option is clicked twice consecutively

5. The second for-loop iterates over each series item; the if-else condition checks whether the series index of the current iteration is equal to the series index of the selected series item, in which case it shows the series, or else hides it. The ‘isShow’ check determines if the same option has been clicked again, in which case all series are shown

Likewise, this event can be customized to change the selected series color, show/hide the series, highlight the series, etc based on the user’s requirements, thus delivering the expected functionality to the BI user.

The VBX script box can be used to achieve more such customizations to any component available on Lumira Designer. Please follow the below links for a few other interesting scenarios:

Dashboard Hacking with VBX HTML Box for SAP Lumira Designer
Conditionally Format Dimension in VBX Advanced Table using VBX ScriptBox for Lumira Designer

 

Know more about Visual BI Extensions (VBX) for SAP Lumira Designer here.


Dataflow Creation and Usage in Power BI – The Self Service ETL


A Dataflow is the initial data preparation that takes place in Power BI before a report is built. Power BI follows an ETL (Extract, Transform and Load) process to perform this function, and it now brings the flexibility of self-service ETL through a simple interface and navigation. Dataflow creation is performed inside the Power Query functionality.

dataflow-creation-usage-power-bi-self-service-etl

Image 1: Structure in Power BI Data and Reporting

 

Dataflows can easily be created by performing the below steps:

1. Navigate to your workspace and select Dataflows
2. Go to + Create on the top right to bring up a dropdown
3. Select the option -> Dataflow

dataflow-creation-usage-power-bi-self-service-etl

Image 2: Data Flows Creation

 

The below mentioned 3 options will be visible inside the Dataflow creation:

1. Entities
2. Linked Entities
3. Common Data Model

 

1. Entities

An entity is a set of fields that are used to store data, much like a table within a database.

dataflow-creation-usage-power-bi-self-service-etl

Image 3: Choose an option

 

Select an appropriate entity to start the dataflow creation. The user will see a simple and rich UI screen that helps in choosing the data source connection we need. This is applicable for cloud, on-premise or even a simple Excel sheet.

dataflow-creation-usage-power-bi-self-service-etl

Image 4: Data Sources in Power Query

 

Now, select the data source that you need to connect to your data.

dataflow-creation-usage-power-bi-self-service-etl

Image 5: Connectivity gateway for the data source

 

Choose the appropriate tables inside to fetch the data from.

dataflow-creation-usage-power-bi-self-service-etl

Image 6: Tables in Data Source

 

Once the tables are selected, we can proceed to use the dataflow editor for ETL. This step is very similar to the initial Power Query we use for cleansing our data in Power BI Desktop, but it hosts much more advanced functionality to cleanse, refresh and schedule your data.
Once you’ve created a dataflow, you can define its refresh frequency by setting a refresh schedule in the dataflow settings. You can also use the Advanced Editor to define your queries in the M language.

 

2. Linked Entities

Linked entities allow users to reuse data which exists in the lake, thereby allowing them to manage and organize projects with complex ETL processes, from one or multiple sources, while also letting analysts build on each other’s work.

 

3. Common Data Model

These are Microsoft Standardized Schemas for your data. Once we have finished our cleansing process we can start with the transformation mapping fields process and leverage the use of a common data model. To leverage the Common Data Model with your dataflow, click on the ‘Map to Standard’ transformation in the ‘Edit Queries’ dialog.

dataflow-creation-usage-power-bi-self-service-etl

Image 7: Mapping Fields

 

If any fields do not get mapped to Common Data Model fields, they are set to null. You can then proceed to save your dataflows. Finalize your dataflows and create a ‘scheduled refresh’ for your data.

dataflow-creation-usage-power-bi-self-service-etl

Image 8: Refresh Scheduling in Power BI

 

Now you can consume the dataflows directly in Power BI Desktop and use them for your reporting and analysis.

 

Know more about Microsoft Power BI services offerings from Visual BI solutions here.


Usage of Key Influencer Visual in Power BI


The Key Influencers visual in Power BI makes excellent use of Machine Learning and AI capabilities to derive insights about your data. This feature was introduced in the February 2019 Power BI release, and Key Influencers is Power BI’s first Artificial Intelligence (AI) powered visualization.

With Key Influencers, business users can now gain insights into their data by further leveraging Machine Learning capabilities. This blog features how to enable this feature in Power BI.

1. To enable this feature, we will need to go to Options -> Global -> Preview Feature and select the highlighted -> Key Influencers Visual.

usage-key-influencer-visual-power-bi

Image 1: Enabling Key Influencer Visual in Power BI

 

2. The user may have to close and start Power BI Desktop again for this visual option to be shown on the Visualization panel

usage-key-influencer-visual-power-bi

 

Understanding Key Influencers

The Key Influencer is an AI Visual within Power BI. This will have two tabs showcasing the visual’s usage.

1. Key Influencers Tab: This section of the visual helps in understanding how the currently selected dimensions relate to the measure being analyzed. The Key Influencers tab will display a ranked list of the individual contributing factors that drive the selected condition. Let’s say we want to analyze Sales (in Dollars) based on Store location and Volume Sold (Gallons).

usage-key-influencer-visual-power-bi

Image 3: Selection Criteria for Key Influencer

 

Now the AI behind this visual is triggered to derive insights for the current ‘Analyze’ and ‘Explain by’ selections. Power BI helps us understand the visual we have obtained by giving us insight into the metric. For example, let’s say we want to check when the profit is high and how much volume (in gallons) needs to be sold.

usage-key-influencer-visual-power-bi

Image 4: Key Influencer Chart Visual

 

We get a clear visual showcasing the scenario and where action needs to be taken. We can also check what level of volume sold will result in reduced sales. Key Influencers is a powerful feature that explains the factors impacting a metric.

 

2. Top Segments: This section shows the user the segments in which the expected profit is likely to be low or high. Segments are ranked based on the percentage of records where the condition is met.

usage-key-influencer-visual-power-bi

Image 5: Segments in Key Influencer

 

When we select one particular segment, we will get a detailed insight into it.

usage-key-influencer-visual-power-bi

Image 6: Detailed View of Segment Value

 

We can toggle whether we want to see Key influencers or Top segments only. This is achieved using either of the below options, which can be found under the Analysis tab.

usage-key-influencer-visual-power-bi

Image 7: Toggling Key Influencer Tabs

 

In a nutshell, Key Influencers is an innovative feature that uses Machine Learning and AI to derive insights about a metric or Key Performance Indicator. We expect Power BI to come up with even more features in its upcoming releases.

 

Know more about Microsoft Power BI services offerings from Visual BI solutions here.


7 Things to Know About Paginated Reports in Power BI


Paginated Reports bring SQL Server Reporting Services to Power BI. Through paginated reporting, we can now print and share reports that we call “paginated” because they tend to be well formatted and fit well on a page.

 

1. Understanding Paginated Reports

Paginated reports are based on the Report Definition Language file in SQL Server Reporting Services. They are also called “Pixel Perfect” because you can control the report layout for each page. We can also load images and charts onto these reports. Paginated reports are best for scenarios that require a highly formatted, pixel-perfect output optimized for printing or PDF generation.

7-things-to-know-about-paginated-reports-power-bi

Paginated Report

 

2. Where to create Paginated Reports?

We can make use of Power BI Report Builder, a tool used to create paginated reports and have them published on the Power BI Service. This is a standalone tool from Power BI.
Alternatively, we can make use of SQL Server Reporting Services (SSRS). These reports are compatible with the Power BI Service. Power BI Service maintains backward compatibility.

7-things-to-know-about-paginated-reports-power-bi

 

3. Data Sources for Paginated Reports

Paginated reports can have multiple data sources connected and do not have an underlying data model. Report Builder can directly fetch and read the data onto the reports from the server.

Currently, the below data sources are supported:

  • Azure SQL Database and Data Warehouse
  • Azure Analysis Services (via SSO)
  • SQL Server via a gateway
  • SQL Server Analysis Services via a gateway
  • Power BI Premium Datasets
  • Oracle
  • Teradata

There will be more additional sources added in the future.

 

4. Licensing for Paginated Reports in Power BI

We will need to either have a License purchased for Power BI Embedded or have a Power BI Premium – Capacity P1, P2 or P3. This is used to host the paginated reports onto the Power BI Service.

In order to use Paginated reports in your Power BI Service, you will need to do the following. In your workspace go to settings, under Admin Portal-> Capacity Settings.

Scroll down to Workloads-> Paginated Reports and turn it ON. You will need to specify a memory (capacity) provided for the paginated reports to be used on the Power BI Service.

Create a workspace and assign Dedicated Capacity by turning the toggle to ON.

 

5. Setting up subscriptions of the Reports to Users

We will need to click on the subscribe button which can be found on the top right of the Paginated Report. This will enable the option to send the Paginated Report as an email to users. You can then set the subscription frequency, body and header of the email to be sent with the pdf file of the paginated report.

 

6. Exporting options in Power BI Service

We can currently export Paginated Reports in multiple formats like Microsoft Excel, Microsoft Word, Microsoft PowerPoint, PDF, CSV, XML, and MHTML.

 

7. Limitations on Paginated Reports

  • Pinning report pages or visuals to Power BI dashboards. You can still pin visualizations to a Power BI dashboard from an on-premises paginated report on a Power BI Report Server or Reporting Services report server. See Pin Reporting Services items to Power BI dashboards for more information
  • Interactive features such as document maps and show/hide buttons are currently not possible
  • Subreports and drill through reports
  • Shared data sources and shared datasets
  • Visuals from Power BI reports
  • Custom Fonts
  • Bookmarks

 

Know more about Microsoft Power BI services offerings from Visual BI solutions here.


Top 10 features: New version of Visual BI Extensions for SAP Lumira Designer


We are excited to give you a sneak peek into the upcoming version of our Visual BI Extensions for SAP Lumira.

In case you are not familiar with our extensions yet, you can find some introductory material here: VBX Extensions. Our extensions are grouped into five main categories: Charts, Maps, Specialty, Utilities and Selectors.

This month we will be releasing the new version VBX 2.4 and this blog offers you a preview of its highlights:

 

1. Charts

With this release, we are adding lots of new charting capabilities; some of them are listed below:

top-10-features-new-version-visual-bi-extensions-sap-lumira-designer

Pyramid Chart

 

top-10-features-new-version-visual-bi-extensions-sap-lumira-designer

Activity Gauge

 

top-10-features-new-version-visual-bi-extensions-sap-lumira-designer

Lolli-Pop Chart

 

top-10-features-new-version-visual-bi-extensions-sap-lumira-designer

Dot Plot Chart

 

2. Maps

The tooltip in the Marker Layer of the ESRI Map can be configured to show any VBX chart with the column dimensions and z-axis measures.

top-10-features-new-version-visual-bi-extensions-sap-lumira-designer

 

3. Specialty

Analytics: Presenting a new way to observe the usage of your Dashboards by different groups in your organization. This provides insights into how your dashboard is being consumed. It will be critical in improving your dashboard design.

Gantt Chart Enhancements: We have brought lots of new enhancements on top of the Gantt Chart, now you can define conditional formatting based on different levels of your data and bring it as part of the table.

 

4. Utilities

VBX Theme: All VBX components align with the Lumira application theme, bringing the application’s look and feel on par with the Lumira theme. Just by changing the theme, we can alter the look and feel of the application with no added effort.

top-10-features-new-version-visual-bi-extensions-sap-lumira-designer

Dashboard before transformation

 

Here you can see how the same dashboard transforms to give a radical look and feel using the VBX Theme.

top-10-features-new-version-visual-bi-extensions-sap-lumira-designer

Dashboard after transformation

 

We are looking forward to your feedback on our latest release. As you can see, there are several new additional options that we are providing for SAP Lumira Designer 2.2 as well as SAP Lumira Designer 2.3. Do note that this is just the “beginning” of our upcoming Roadmap with a lot of enhancements coming every quarter – so stay tuned for more details on these enhancements and a lot more enhancements in the coming months.

 

Know more about VBI Extensions (VBX) for SAP Lumira designer here.


Cross Querying in Azure SQL Database


Azure supports cross querying in Azure SQL Database through elastic queries. Elastic queries allow us to run Transact-SQL that works with multiple Azure SQL Databases, and they can connect to Microsoft tools like Excel and Power BI and to third-party tools like Tableau to query across data tiers with multiple databases. Through this feature, we can query large data tiers and visualize the results in business intelligence (BI) reporting tools.

 

Advantages of using Elastic Queries

  • Elastic queries support read-only querying of remote databases, and SQL server users can migrate applications by linking servers between an Azure SQL environment and on-premises
  • Elastic queries are available on both the standard tier and premium tier
  • We can execute stored procedures or remote functions using sp_execute_remote and push SQL parameters for execution on a remote database
  • Through elastic query, external tables can now refer to remote tables with a different table name or schema
  • According to customer scenarios, elastic queries are categorized as the following partitioning,
    • Vertical partitioning – Cross-database queries: A vertical elastic query is to query among vertically partitioned databases i.e., multiple databases that contain tables of different schema on different data sets. For instance, all tables for HR are on one database while all Finance tables are on another database. This partitioning helps one to query across or to build reports on top of tables in multiple databases
    • Horizontal Partitioning – Sharding: Sharding is the process of distributing a huge volume of data having an identical schema among different databases. For instance, this means distributing a huge amount of transaction table data among multiple databases for improved performance. To achieve this, elastic database tools are used, where an elastic query is required to query or compile reports across multiple shards

 

Elastic Queries in Vertical Partitioning

Data located in one SQL Database can be made available to other remote SQL Databases through elastic queries. The schema and structure of these databases can vary. This process is also known as scaling up.

Steps for implementation

Let’s assume that there are four databases, namely HR, Finance, Products and CRM, and we will perform cross querying across them in Azure SQL Database. To execute the below queries, the user must have the ALTER ANY EXTERNAL DATA SOURCE permission, which falls under the ALTER DATABASE permission. This permission is needed to refer to the underlying data source.

cross-querying-azure-sql-database

 

1. Create a database master key, i.e. a symmetric key which is used to protect the private keys of certificates and asymmetric keys within the HR database, as shown below.

cross-querying-azure-sql-database

 

2. Create a database scoped credential which is not mapped to a server login or database user but used by the database to access the external location anytime to perform an operation that requires access.

cross-querying-azure-sql-database

 

3. Create external data sources for remote databases like Finance, Products and CRM, with the type RDBMS, within the HR database. In the below image we have created a data source for Finance, but one or many data sources can be created as per the number of databases.

cross-querying-azure-sql-database

 

4. Create an external table for the elastic database query. For an external table, only the metadata is stored in SQL, along with basic statistics about the referenced table. No actual data is moved or stored in SQL Server. Here we have created an external table for Finance with the above-created data source.

cross-querying-azure-sql-database

 

Now we can access the remote Finance database from the HR database. Likewise, we can create data sources and external tables for the other databases as well.
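
For reference, the statements behind these four steps can also be scripted. The sketch below issues them from Python via pyodbc; the server name, credentials, object names and the FinanceTable column list are hypothetical placeholders, and the external table definition must match the schema of the remote Finance table.

import pyodbc

# Connect to the HR database (placeholder server and credentials); autocommit
# is enabled because the statements below are DDL.
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=myserver.database.windows.net;DATABASE=HR;'
    'UID=hr_admin;PWD=StrongPassword1!', autocommit=True)
cur = conn.cursor()

# Step 1: database master key protecting the credential secret.
cur.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword1!'")

# Step 2: database scoped credential used to reach the remote database.
cur.execute("""CREATE DATABASE SCOPED CREDENTIAL ElasticCred
               WITH IDENTITY = 'remote_user', SECRET = 'RemotePassword1!'""")

# Step 3: external data source of type RDBMS pointing at the Finance database.
cur.execute("""CREATE EXTERNAL DATA SOURCE FinanceSrc WITH (
                   TYPE = RDBMS,
                   LOCATION = 'myserver.database.windows.net',
                   DATABASE_NAME = 'Finance',
                   CREDENTIAL = ElasticCred)""")

# Step 4: external table (metadata only); columns must match the remote table.
cur.execute("""CREATE EXTERNAL TABLE dbo.FinanceTable (
                   InvoiceID INT NOT NULL,
                   Amount DECIMAL(18, 2))
               WITH (DATA_SOURCE = FinanceSrc)""")
conn.close()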

 

Elastic Queries in Horizontal Partitioning

Database sharding is a technique to split large databases into smaller partitions across identically structured databases. These individual units are called shards, and each resides in a separate database. This mechanism is also called scaling out. Through this process, data maintenance becomes easier.

Steps to implementation

As a prerequisite, you need to create a shard map manager along with multiple shards, followed by insertion of data into the shards. For more information on the development of shards, please refer here.

Let’s take CRM Database as an instance,

cross-querying-azure-sql-database

 

Once the shard map manager has been set,

1. Create database master key and database scoped credential as shown in vertical partitioning but here the database should have the same structure

2. Create an external data source in the CRM database and pass the name of the shard map created in shard map manager to SHARD_MAP_NAME

cross-querying-azure-sql-database

 

3. Create an external table in CRM database for the usage of an elastic database query. This table contains only metadata of the table that has been referenced.

cross-querying-azure-sql-database

 

4. Now we can connect the CRM database to any third-party tool like Excel and query the data from the remote database, namely CRMdbsrc1.
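
Analogously, the shard-map-based objects from the steps above can be scripted as well. Again a hedged sketch with hypothetical names: the shard map name must be the one created in the shard map manager, the credential and master key must already exist in the CRM database (step 1), and the column list must match the sharded table.

import pyodbc

# Connect to the CRM database that acts as the elastic query head database.
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=myserver.database.windows.net;DATABASE=CRM;'
    'UID=crm_admin;PWD=StrongPassword1!', autocommit=True)
cur = conn.cursor()

# External data source of type SHARD_MAP_MANAGER referencing the shard map.
cur.execute("""CREATE EXTERNAL DATA SOURCE CRMShardSrc WITH (
                   TYPE = SHARD_MAP_MANAGER,
                   LOCATION = 'myserver.database.windows.net',
                   DATABASE_NAME = 'ShardMapManagerDb',
                   CREDENTIAL = ElasticCred,
                   SHARD_MAP_NAME = 'CustomerShardMap')""")

# External table spanning the shards; DISTRIBUTION names the sharding column.
cur.execute("""CREATE EXTERNAL TABLE dbo.Customers (
                   CustomerId INT NOT NULL,
                   CustomerName NVARCHAR(100))
               WITH (DATA_SOURCE = CRMShardSrc,
                     DISTRIBUTION = SHARDED(CustomerId))""")
conn.close()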

 

Limitations of using Elastic Queries

    • While using elastic queries in the standard tier, the performance over large databases will be slow
    • Elastic queries are not preferred for ETL when there is a large amount of data movement to a remote database
    • Only the Read-Only operations over remote databases are supported.

 

Learn more about Visual BI’s Microsoft Azure offerings here.



Understanding Certified and Shared Data Sets in Power BI


Power BI has brought in a new feature that lets us understand the data set that we are working with, helping us identify if the dataset is a Certified Dataset or a Verified Dataset.

This feature helps organizations understand which data is the real, trusted data that should be used in reports. The best example would be the financial data used by financial firms or stock market data. Datasets can exist in any number of copies, but only one will host the true data points without alterations.

This is where Power BI helps report users, based on their user category, make use of the correct data to gain true insights for their reports.

Power BI achieves this through the concept of Endorsements.

understanding-certified-shared-data-sets-power-bi

 

To use this new feature, we need to navigate to the settings of the dataset.

understanding-certified-shared-data-sets-power-bi

 

Shared Datasets

Datasets can be shared among different users based on the permission or access level granted on the workspace. The superuser who creates the workspace can manually add users and then assign the respective users their roles in the workspace.

understanding-certified-shared-data-sets-power-bi

 

The new workspace experience is required for these datasets to be shared.

When creating a new workspace, we need to set permissions for its users. We can allow them to connect to the app’s underlying datasets via Build permissions.

understanding-certified-shared-data-sets-power-bi

 

Build Permissions of Datasets in Power BI

Power BI also provides data governance features, such as the ability to grant access to datasets or to restrict users from using them.

understanding-certified-shared-data-sets-power-bi

 

An admin or member of the workspace where the dataset resides can decide, while publishing the app, whether users with permission to the app also receive Build permission for the underlying datasets.

 

Control the use of datasets across workspaces:

Admins can control the flow of datasets within the organization. The Power BI admin can restrict whether datasets can be shared and used across workspaces in the tenant.

understanding-certified-shared-data-sets-power-bi

 

Lineage Tracking

Power BI has introduced another new feature for datasets called Lineage view. This view helps us understand which reports consume which datasets, and whether those datasets are endorsed or non-endorsed.

Dataset owners in Power BI will be able to see the downstream use of their shared datasets by other users through the related content pane, allowing them to plan changes.

We can also see the initial source of data from which these datasets are being created.

understanding-certified-shared-data-sets-power-bi

 

Limitations

  • Building a report on top of a dataset in a different workspace requires the new workspace experience at both ends: both the report’s workspace and the dataset’s workspace must use the new workspace experience.
  • In a classic workspace, the dataset discovery experience only shows the datasets in that workspace.
  • You can create reports in app workspaces that are based on datasets in a different workspace. However, you can’t create an app for a workspace that contains those datasets.
  • Free users in Desktop only see datasets from My Workspace and from Premium-based workspaces.
  • If you want to add a report based on a shared dataset to an app, you must be a member of the dataset workspace. This is a known issue.
  • “Publish to the web” doesn’t work for a report based on a shared dataset. This is by design.
  • If two people are members of a workspace that is accessing a shared dataset, it’s possible that only one of them can see the related dataset in the workspace. Only people with at least Read access to the dataset can see the shared dataset.

 

Know more about Microsoft Power BI services offerings from Visual BI solutions here.

Subscribe to our Newsletter

The post Understanding Certified and Shared Data Sets in Power BI appeared first on Visual BI Solutions.

SAP Analytics Cloud – Embedding Google Maps by adopting R Visualization

$
0
0

In this blog, we will see how to embed a Google Map in an SAP Analytics Cloud application. The blog primarily focuses on pinning a selected member (e.g. a store) on a Google Map with the help of the R widget. We will also learn how to retrieve dimensions that are not displayed in the chart.

The scenario is to pin the store selected in the chart onto a Google Map. You can see the embedded Google Map in action below.

sap-analytics-cloud-embedding-google-maps-by-adopting-r-visualization

Google Maps in Action

 

Defining Chart Structure

Let us consider an example with Sales data. Include the Sales (SALE_DOLLARS) measure and the Store (STORENUMBER) dimension in a chart. The Store dimension is set to display the Description.

sap-analytics-cloud-embedding-google-maps-by-adopting-r-visualization

SAP Analytics Cloud Bar Chart Structure

 

Retrieving and passing the selected store to R Visualization

Add the following script to fetch the user-selected store. Since you cannot directly fetch the latitude and longitude of the selected store ID, the selected member is passed to the R visualization, which is then used to extract the latitude and longitude.

sap-analytics-cloud-embedding-google-maps-by-adopting-r-visualization

SAP Analytics Cloud bar chart – OnSelect event

 

The ID of the user-selected store is stored in the global variable store_id. The store ID is then assigned to the R variable StoreNum using the setNumber() API. Another global variable, flag, is used to ensure that the onResultChange event is executed only after a user selection; this will be explained later.

The global variable store_id defaults to zero and is passed to the R visualization in the onInitialization event so that the application runs without errors at startup. The definition of the global variable store_id is also shown in the snapshot below.

sap-analytics-cloud-embedding-google-maps-by-adopting-r-visualization

Initializing global variable store_id on application start up

 

Initializing R Visualization Structure

Select the required fields, i.e. Latitude, Longitude, Store, and Sales. You can visit this blog for a detailed perspective on the R data frame and how to set it up in SAP Analytics Cloud. The Store dimension is set to display the ID in the R data frame, as shown in the snapshot below.

sap-analytics-cloud-embedding-google-maps-by-adopting-r-visualization

R visualization data frame initialized to ID display property

 

As this R widget doesn’t visualize anything, you can hide it by disabling the option Show this item at view time from the Styling panel.

 

Extracting latitude and longitude using R script

Add the following R script to extract the latitude and longitude of the selected store.

sap-analytics-cloud-embedding-google-maps-by-adopting-r-visualization

R script for filtering and extracting dimensions based on user selection

 

Here the script filters the data frame to the StoreNum input parameter. The corresponding latitude and longitude values are then extracted and stored in the R variables LAT and LONG. These R variables can later be retrieved to embed the Google Map.
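The core of the script might look like the following minimal R sketch. The data frame name SalesData and the coordinate column names LATITUDE and LONGITUDE are assumptions and should be replaced with the names used in your own R data frame.

# Filter the data frame to the store selected in the chart (StoreNum is the input parameter)
selected <- SalesData[SalesData$STORENUMBER == StoreNum, ]

# Extract the coordinates of the selected store for later retrieval by the application script
LAT  <- selected$LATITUDE[1]
LONG <- selected$LONGITUDE[1]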

 

Embedding Google Map

Add a Web Page widget to the canvas to show the Google Map. Then add the following script to the onResultChange event of the R visualization to pass the latitude and longitude through the URL.

sap-analytics-cloud-embedding-google-maps-by-adopting-r-visualization

Manipulating Google Maps link dynamically

 

The global variable flag ensures the script is executed only after a user selection. The R variable values (latitude and longitude) are assigned to local variables, which are then appended to the Google Maps URL (https://maps.google.com/maps?q=) and set on the Web Page widget.
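For example, if the selected store resolved to a latitude of 41.60 and a longitude of -93.61 (hypothetical values), the URL set on the Web Page widget would be:

https://maps.google.com/maps?q=41.60,-93.61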

Now save and run the application. When you select a store, the store ID is passed to the R visualization from the onSelect event. Once the latitude and longitude are extracted using the R script, the onResultChange event is executed, which sets the proper Google Maps URL on the Web Page widget.

You can also plot an area or a city in Google Maps (or other map services) by applying the same logic. This blog is limited to pinning one coordinate at a time based on the user selection; you can extend the functionality to multi-select and pin multiple locations.

 

Reach out to us here today if you are interested in evaluating if SAP Analytics Cloud is right for you.

Subscribe to our Newsletter

The post SAP Analytics Cloud – Embedding Google Maps by adopting R Visualization appeared first on Visual BI Solutions.

All About Linked Analysis in SAP Analytics Cloud

$
0
0

Dynamic interaction is a key capability that helps users explore and understand data better in any dashboard or story. The SAP Analytics Cloud Linked Analysis feature allows dynamic interaction between widgets. Earlier, each story page had a single linked set through which selections could be passed down as filters. From version 2019.11 onwards, each chart can have its own set of linked widgets. This blog shows how to fully utilize the Linked Analysis feature for better interaction.

Consider a scenario where you have a Sales Summary page that has various charts for Region, Category, Sub-category, Trend and Top 10 Products. Let us see various options used to enable dynamic interactivity.

linked-analysis-sap-analytics-cloud

Linked Analysis in Action

 

Widget-Specific Linked Analysis

Prior to version 2019.11, the option to enable Linked Analysis was available in the toolbar, and there was only a single linked set for each page. Now that Linked Analysis is specific to each widget, you can find the option in the widget’s Quick Menu.

linked-analysis-sap-analytics-cloud

Linked Analysis option in Quick Menu

 

Linked Analysis can be enabled for the following widgets.

    1. Chart
    2. Table
    3. Geo Map
    4. Input Control

Widget as a Story Filter

In the scenario mentioned above, the Donut Chart showing Sales per Region can be used to filter the entire story, making it easy for Regional Managers to analyze their data. In the Linked Analysis panel, the option ‘All widgets in the Story’ is enabled. Under Settings, there is also an option to override any existing cascading effects.

linked-analysis-sap-analytics-cloud

All widgets in the story option

 

Once the user filters a member in the Donut Chart, a Story Filter is automatically added. The user can manually remove the Story Filter without affecting the Donut Chart. One limitation is that there is no option to enable the filter on data point selection if the widget is used as a Story Filter.

linked-analysis-sap-analytics-cloud

Story Filter added by default

 

If you want the selection to affect only the page, you can choose the option ‘All widgets on this Page’. To enable filtering on selected data points, choose the option to use widgets as page filters.

linked-analysis-sap-analytics-cloud

All widgets on this page option

 

Linked Analysis for Input Control

In the Sales Summary scenario, the input control used to choose years must not affect the Trend Chart. The option ‘Only selected widgets’ is enabled, and the Trend Chart is removed from the list of widgets that can be linked. This way, selections made in the Input Control will not affect the Trend Chart. Unlike charts, all Input Controls have only two options in Linked Analysis: an Input Control can affect either the whole page or selected widgets.

linked-analysis-sap-analytics-cloud

Custom Linked Set

 

Filter on datapoint selection

In the bar chart ‘Sales per Category’, the common selection mode is to choose a single bar, so the option ‘Filter on data point selection’ is enabled. In the case of the scatterplot for Sub-categories, the common selection mode is lasso, so data point selection is not enabled; the user instead filters the values in the scatterplot for the selection to affect the other linked widgets.

linked-analysis-sap-analytics-cloud

Linked Analysis of Category and Sub-Category chart

 

There is also an option to automatically connect newly created widgets while configuring Linked Analysis.

Please note that Linked Analysis is supported for both import and live data connections. If two models are used in a story, make use of the Link Dimensions option, which allows linked analysis to be applied to widgets from two different models.

Reach out to our team here to know more about SAP Analytics Cloud and other offerings from Visual BI Solutions.

Subscribe to our Newsletter

The post All About Linked Analysis in SAP Analytics Cloud appeared first on Visual BI Solutions.

Exporting Measure and Column formulas from Power BI

$
0
0

Power BI supports extensive Data Analysis Expressions (DAX) scripting to handle complex business scenarios. It is good practice to maintain a document for each report to help future development. The document can include screenshots of the report pages, the purpose of the report, lists of navigation, tables and queries, and a complete listing of the DAX formulas.

All of this information is easy to capture except for the DAX formulas. Using DAX, users can create measures and calculated columns in Power BI. However, there is no direct option to export the DAX formulas for documentation purposes. This blog covers exporting the measure and column definitions from Power BI into a CSV file.

Steps to export the DAX formulas

1. Navigate to Model View.
exporting-measure-column-formulas-power-bi

2. Click File Menu -> Export as Power BI Template.
exporting-measure-column-formulas-power-bi

3. The exported Power BI template will have the extension .pbit. Since the .pbit file is an archive, open it with an archive tool such as WinRAR or 7-Zip.
exporting-measure-column-formulas-power-bi

 

4. List of files in the Power BI template:
exporting-measure-column-formulas-power-bi

5. Open the DataModelSchema file in any JSON editor. There are plenty of JSON editors available online; you can use the link below to access the same JSON editor used in this blog.
https://jsoneditoronline.org/

6. JSON is generally represented as key-value pairs. Navigate to the ‘tables’ key-value pair, which lists all the tables used in the Power BI report.
E.g.: In the highlighted section below, the key is ‘tables’ and its value is the array within the square brackets [].
exporting-measure-column-formulas-power-bi

 

7. Each key-value pair inside the ‘tables’ key has the following structure. Columns and measures created in Power BI have separate key-value pairs; here, only the measures key-value pair is highlighted (a simplified excerpt of this structure is shown after the steps).
exporting-measure-column-formulas-power-bi

 

8. After expanding the measures key, we can see the value of each measure; the value is its DAX formula.
exporting-measure-column-formulas-power-bi

9. Copy the value of the measures key.
exporting-measure-column-formulas-power-bi

10. Paste the copied value of the measures key into any JSON-to-CSV converter. You can refer to this website to learn more: https://json-csv.com/

11. The CSV file will have the following structure. This is based on the value of the measures key; it may differ for the columns key-value pair.
exporting-measure-column-formulas-power-bi
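For orientation, the table entry and measures key referenced in steps 6 to 9 typically look something like the simplified excerpt below. The table name, measure name and DAX formula are purely illustrative, and the exact set of keys can vary between Power BI versions.

{
  "name": "Orders",
  "columns": [ ... ],
  "measures": [
    {
      "name": "Total Sales",
      "expression": "SUM('Orders'[Sales])"
    }
  ]
}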

 

With this, we have exported the measure and column formulas from Power BI to a CSV file.

 

Know more about Microsoft Power BI services offerings from Visual BI solutions here.

 

Subscribe to our Newsletter

The post Exporting Measure and Column formulas from Power BI appeared first on Visual BI Solutions.

Inheritance of Security from SAP BW System to Power BI Part 1

$
0
0

Data security is a prime concern for any organization in today’s era of digitization. Access to organizational data can be described in terms of who views the data and what level of data they can view. ‘Who’ defines the users who view the data, and ‘What’ defines the tool that takes the data as its source for visualization.

It is important that users visualize data with the right constraints applied via security objects. SAP BW authorization objects provide this capability by restricting data at the InfoObject level. Power BI has the capability to visualize and showcase the data that is needed. In this blog, we will look at how to move from the traditional, standard Power BI consumption approach to a more secure view of the data. Power BI provides Row-Level Security (RLS) by default; when security is inherited from the SAP BW level, this default changes and security is no longer applied at the report level.

To demonstrate this, we will walk you through a proof of concept that applies region-specific data restrictions for two users.

Let’s create two users with the authorization objects assigned to them as shown below. We will consume BEx queries from the BW on HANA system.

PBIUSER1 ZREGION_APAC – Asia Region
PBIUSER2 ZREGION_OBJ1 – United States

The Traditional Approach for Viewing Reports

In the diagram below, we initially build the data source in Power BI with the SAP BW database credentials of PBIUSER1, who has the authorization object ZREGION_APAC and can therefore view only APAC region data.

inheritance-security-sap-bw-system-power-bi-part-1

Initial Approach on Retrieving Data via Gateway Instance

 

inheritance-security-sap-bw-system-power-bi-part-1

Query View of Data from SAP System

 

After creating the report, we publish it to the Power BI service. When we try to view the report again, it first needs to establish a connection with the database, and this connection goes through the installed on-premises data gateway. Since we are connecting to an SAP BW system, we need to create a gateway data source instance to connect to the SAP BW system.

inheritance-security-sap-bw-system-power-bi-part-1

Data Source Gateway Connection Instance

 

Now let’s key in the credentials for the gateway instance as PBIUSER2. The data returned from the database is then automatically filtered by the restrictions that apply to PBIUSER2, so PBIUSER2 will only be able to see United States data and not APAC. Even an outside user who has access to the entire query in the BW system will only get the restricted data view imposed by PBIUSER2. This does not fulfill the security constraints and is therefore a failure. However, we can overcome this issue with Kerberos single sign-on, which is supported by Power BI.

inheritance-security-sap-bw-system-power-bi-part-1

Gateway Instance for SAP BW System

 

The image above explains the authentication/response process: when any user tries to open the report, Power BI first checks the user’s credentials and passes them via SSO to the database credentials through the Kerberos implementation. We have to sync Azure Active Directory with our local Windows Active Directory instance. Once the entire setup is done by following the link provided below, we can verify the SSO.

https://docs.microsoft.com/en-us/power-bi/service-gateway-sso-kerberos

For a user who has full view access, their credentials are passed through the SSO-enabled gateway to retrieve the entire data set. We can enable SSO via Kerberos in the advanced settings found at the bottom of the gateway instance.

inheritance-security-sap-bw-system-power-bi-part-1

Enabling SSO via Kerberos

 

Using the method above, Power BI can enforce the required security filters at the database level and deliver the report to each user as intended. For more details, please go through the link above. Whether the Kerberos method is recommended depends on the architecture the organization follows; the organization’s architecture needs to be reviewed first in order to determine the best way to enable SSO.

Source:

https://docs.microsoft.com/en-us/power-bi/service-gateway-sso-kerberos

https://docs.microsoft.com/en-us/power-bi/desktop-use-directquery

 

Know more about Microsoft Power BI services offerings from Visual BI solutions here.

Subscribe to our Newsletter

The post Inheritance of Security from SAP BW System to Power BI Part 1 appeared first on Visual BI Solutions.
