
Drag and Drop Components at Runtime in SAP Lumira Designer


With the emergence of self-service BI tools that allow users to easily modify the layout of visualizations, a feature such as drag and drop of components into the canvas at runtime is very helpful. Visual BI Extensions (VBX) has a suite of components that offer a wide variety of features that greatly enhance the visualization capabilities of SAP Lumira Designer. The VBX Script Box is one such powerful component that helps leverage the extensive capabilities of JavaScript/jQuery inside Lumira Designer. This blog walks us through the steps to drag and drop components at runtime in SAP Lumira Designer.

This solution supports both native and VBX components, since the VBX Script Box treats all components as HTML elements.

Drag and Drop Components at Runtime in SAP Lumira Designer

 

Setting up the Dashboard

The example dashboard below contains a VBX Column-Bar Chart, a native Lumira Designer Pie Chart and a Crosstab. Create a placeholder area with the required number of panels (or any other container component), which will be the target for the component drop.

Drag and Drop Components at Runtime in SAP Lumira Designer

 

Adding the VBX Script Box

Add a VBX Script Box to the application with the scripts given below, to enable drag and drop. You would need the IDs of the components to be made draggable and the target containers for the drop.

 

Setting the ‘Draggable’ attribute

Enabling the ‘draggable’ attribute of a component allows the user to move the component at runtime (drag operation)

$("#__component1").attr("draggable", "true");

 

Functions for Drag and Drop

JavaScript/jQuery functions can be written to handle the three different events – drag of component, drop of component in the target and drag of component over the target.

On Drag of component

The drag(ev) function gets called when the component that is set as draggable is dragged. This sets the ID of the dragged component in the dataTransfer object using the setData method. Here, target refers to the component being dragged.

function drag(ev) {
  ev.dataTransfer.setData("text", ev.target.id);
}

 

On Drop of component

This function gets called when the component is dropped in a target panel. The default behaviour for the drop event is prevented, the ID of the dragged component is obtained using the getData method and is then appended to the target container.

function drop(ev) {
  ev.preventDefault();
  var data = ev.dataTransfer.getData("text");
  ev.target.appendChild(document.getElementById(data));
}

 

On Drag of the component over the target

The allowDrop function gets called when the component is dragged over the target. It simply prevents the default behaviour for the event, which is required to allow the drop to occur.

function allowDrop(ev) {
  ev.preventDefault();
}

 

Binding the functions

Now, bind the event handler functions defined above to the corresponding events of the components.

$("#__component1").attr("ondragstart", "drag(event)"); // drag() is called when the component is dragged
$("#PANEL_1_panel1").attr("ondrop", "drop(event)"); // drop() is called when the dragged component is dropped in the container panel
$("#PANEL_1_panel1").attr("ondragover", "allowDrop(event)"); // allowDrop() is called when the component is dragged over the container panel

 

CSS can be used for any formatting as required if, for instance, the dropped component has to fill up the target container area. The setInterval() method can be used if you would like to proactively listen to/track drag drop events at regular intervals.
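For instance, a minimal sketch (reusing the component and panel IDs from the steps above) could re-apply the drag-and-drop attributes at a fixed interval so the bindings survive any re-rendering of the components:

// Sketch only: re-apply the drag-and-drop attributes every second
setInterval(function() {
    $("#__component1").attr("draggable", "true").attr("ondragstart", "drag(event)");
    $("#PANEL_1_panel1").attr("ondrop", "drop(event)").attr("ondragover", "allowDrop(event)");
}, 1000);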

In addition to providing an enhanced, near-self-service drag-and-drop experience, the changes made using JavaScript also persist in exported PDF files, so the user can download customized report extracts.

Note: The ‘draggable’ attribute is a feature of HTML5 supported in most browsers and in IE 9.0 and above.

 

Stay tuned to this space for more customizability with runtime addition of containers to the placeholder area. Learn more about Visual BI’s SAP Lumira Offerings here.



Understanding Exclude Level of Detail (LOD) in Tableau


In our previous blogs on LOD in Tableau we covered the concepts of Include and Fixed LOD expressions and their use cases. In this blog, we will focus on Exclude LOD in Tableau.

What is Exclude LOD?

Computations are performed for all dimensions present in the view except for the dimension(s) mentioned in the expression.

{Exclude [Dimension1], [Dimension2] … [Dimension n]: Aggregation}

Let’s understand Exclude LOD through an example.

Build the following view:

Understanding Exclude Level of Detail (LOD) in Tableau

Image 1

Currently sales are being computed at Region and State level.

Now add city to the view:

Understanding Exclude Level of Detail (LOD) in Tableau

Image 2

In the above image the Sales are being computed at Region, State and City level.

Create the Exclude LOD calculation and add it to the view.

{EXCLUDE [City] : sum([Sales]) }

Understanding Exclude Level of Detail (LOD) in Tableau

Image 3

The column with the Exclude expression is different from the Sales column.

Since the LOD calculation excludes the City dimension when computing sales, the sales are computed only at the Region and State level. This can be verified by comparing the sales value of Illinois in Image 3 and Image 2.

Let’s look at applying Exclude LOD expression in a real-time scenario.

Scenario:

We need to find the difference between the Sales values of two Sub-Categories, with the flexibility to choose the Sub-Category of interest as the reference.

Solution:

Build a Sales by Sub-Category bar chart.

Create a Parameter with field values from Sub-Category and create a calculation as shown below:

Understanding Exclude Level of Detail (LOD) in Tableau

Image 4

Adding above expression to the view we get following output:

Understanding Exclude Level of Detail (LOD) in Tableau

Image 5

As per the calculation, only the sales of the selected parameter value (in this case, Accessories) are shown.
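A hedged sketch of the 'Selected Sub Category Sales' calculation shown in Image 4 (the parameter name [Select Sub-Category] is an assumption) could be:

IF [Sub-Category] = [Select Sub-Category] THEN [Sales] END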

Create the following expression:

{EXCLUDE [Sub-Category] : sum([Selected Sub Category Sales])}

This expression removes Sub-Category from the level of detail, so the selected Sub-Category's total sales are repeated for every Sub-Category in the view.

Since Sub-Category is the only dimension present in the view, adding the Exclude expression gives the same output as using the expression for the selected parameter in a sheet without any dimensions. The output is shown below:

Understanding Exclude Level of Detail (LOD) in Tableau

Image 6

Adding the expression to the previous view we get the following output:

Understanding Exclude Level of Detail (LOD) in Tableau

Image 7

We can see the values are the same as in Image 6.

We can subtract the Exclude expression's output from the actual sales of every Sub-Category. This will show the difference between the sales of the Accessories Sub-Category and the other Sub-Categories.

SUM([Sales]) - SUM([Exclude Sub Category])

Adding this to the view we can easily compare the difference in sales between a selected Sub-Category from parameter and other Sub-Categories.

Understanding Exclude Level of Detail (LOD) in Tableau

Image 8

In subsequent blogs, we will cover other functionalities of Tableau.

Contact us today to learn more about Visual BI’s Tableau consulting & end user training programs here.


Combining Data Sources in SAP Analytics Cloud Data Model


There are two ways for combining data sources in SAP Analytics Cloud. One is through Data Blending, and the other is to combine data sources at the model level at the time of model creation. In this blog, we will look at how to combine data from two different sources at the model level and the types of data sources that can be combined.

 

Combining Data

‘Combine Data’ option is available while creating a new model. This allows you to combine two to five data sources at the model level. The types of data sources that can be combined are shown in the table below:

Combining Data Sources in SAP Analytics Cloud Data Model

 

Consider a scenario where you have two different Excel files as source files – the first file contains supplier and spend information, and the second file contains a ‘Y/N’ flag for each supplier to indicate whether they are preferred suppliers. Create a model by importing the first file as a data source, make the necessary changes to the dimensions and measures, and then combine this data with the second file.

Combining Data Sources in SAP Analytics Cloud Data Model

 

Now let’s look at the steps to combine the 2 data sources at the model level:

1. Create the Data Model

First, you must import one of the source files, say ‘Spend File’, and create a new data model. You can do this by going to

Main Menu -> Create -> Model -> Import a file from your computer -> Select Source File -> Import

Select the required file.

 

2. Combine Data Option

‘Combine Data’ icon is present under Transformations menu as shown in the image below.

Combining Data Sources in SAP Analytics Cloud Data Model

 

On clicking the icon, a pop up appears, asking to select the data source.

Combining Data Sources in SAP Analytics Cloud Data Model

 

You can either get data from a file or directly from a data source. In this scenario, the second data source is also from a flat file.

Combining Data Sources in SAP Analytics Cloud Data Model

Select the source file and click on Import.

 

3. Combine Settings

Once the file has been imported, the ‘Combine Settings’ pop up window appears. Here you can choose a column for combining the two data sources.

Combining Data Sources in SAP Analytics Cloud Data Model

 

Supplier is the common dimension in both data sources and is used to make the join. You can choose only one column for combining data. Once the column has been dragged and dropped into ‘Combine Column’ for both the data sources, a ‘Combine Preview’ is shown in the right-side pane. This provides a summary of the number of records that have been accepted, duplicated or omitted during the join.

Combining Data Sources in SAP Analytics Cloud Data Model

 

4. Join Type

On the top end of the pop-up window, there is a drop-down providing two options:

  • All primary data – combined data contains all primary data values (like left outer join)
  • Intersecting data only – only data that has matched the join condition is present in the result set (like inner join)

By default, ‘All primary data’ is selected.

Combining Data Sources in SAP Analytics Cloud Data Model

 

Sample data set can be viewed on the bottom of the pop-up window.

Combining Data Sources in SAP Analytics Cloud Data Model

 

5. Combined Model

On finishing all the above steps, click on the ‘Combine’ button to join the data sources. Now you can see that the columns from both the files have been combined and are available in the model.

Combining Data Sources in SAP Analytics Cloud Data Model

 

Limitations:

  • The maximum number of combined cells is 30 million.
  • The maximum number of combined rows is 1 million.
  • The maximum number of combined columns is 100.
  • For a given dataset, a maximum of five combine data processes can be run.
  • You cannot combine two datasets using a calculated column.
  • You cannot combine two datasets in a story.

Combining data sources at the model level has its own benefits. It allows you to reuse the combined data model for various stories, saving you the time of blending the models at story level every time you create a new story with the same data. This method also allows you to use the combined data in an analytical application where the only option to blend data is by scripting.

 

Reach out to us here today if you are interested to evaluate if SAP Analytics Cloud is right for you.


Understanding Data Blending in Tableau


One of the most powerful features of Tableau is its ability to blend data. Data blending is commonly utilized for defining a relationship between tables from different data sources, but it has many more capabilities.

What does Data Blending do?

It helps in defining the relationship between tables where joins cannot be implemented. Some of the areas where blending can help are:

  1. When a cross-database join between two tables is not supported by Tableau, data blending can be used to create a relationship between the tables
  2. When two tables have a many-to-many relationship, blending can be used to establish a relationship between them without duplicating the resulting aggregates
  3. When two tables at different levels of detail are joined, duplicate records will be created. This can be avoided using data blending, which does not create duplicate records

And many more…

Let’s do some hands-on exercise to understand its significance.

Scenario (Using sample superstore dataset):

We need to compare the sales achieved in each segment and their targeted sales.

Solution:

Create an excel workbook (Segment target sales) as follows. This data source contains the target sales for each segment.

Understanding Data Blending in Tableau

Image 1

Using sample superstore build the following view.

Understanding Data Blending in Tableau

Image 2

To add the target sales to the view, add the Segment target sales excel file and join on Segment field.

Understanding Data Blending in Tableau

Image 3

Adding the target sales to the existing view we get the following output:

Understanding Data Blending in Tableau

Image 4

The target sales value is much higher than the values in the target sales Excel workbook.

We get this output because the data in the two sources is at different levels of detail.

Sample Superstore contains sales recorded for every transaction and every product, whereas the target sales source has sales values at the Segment level only.

In sample superstore dataset:

Number of records under Consumer segment – 5191
Number of records under Corporate segment – 3020
Number of records under Home Office segment – 1783

Therefore, the target sales for each segment are aggregated in the following way:

Consumer:  5191 * 1000000=5191000000
Corporate: 3020 * 2000000=6040000000
Home Office: 1783*3000000=5349000000

These values are reflected in the view as well as shown in Image 4.

To avoid the duplication of measure values we can utilize data blending.

Remove the target sales table in the data source pane.

Now add the same source through the data menu as shown below:

Understanding Data Blending in Tableau

Image 5

Understanding Data Blending in Tableau

Image 6

Now we have two data sources.

Use segment and sales from sample superstore source and targeted sales from Target sales source. You can switch between the sources by clicking on the list of sources as shown in image 6.

When switching data sources, you can notice a chain link on the segment dimension. This indicates that blending is performed between two tables based on the segment field.

Understanding Data Blending in Tableau

Image 7

From image 7 we can see the target sales as mentioned in target sales excel source.

Blending performs a left join operation between data sources.

It is also important to note that the data source from which a dimension/measure is used first in a sheet becomes the primary data source and other data sources become secondary.

In subsequent blogs, we will cover other features of Tableau.

Contact us today to learn more about Visual BI’s Tableau consulting & end user training programs here.


SAP Analytics Cloud – Application Design Series: 11 – Enable Explorer using Analytic Functions


In the previous blog of this series, we learned about the dynamic display of measures and dimensions in widgets. In this blog, we will learn how to invoke the explorer functionality in an analytic application, in SAP Analytics Cloud.

 

The Explorer

The Explorer helps you perform ad-hoc exploration and gain insights into your data. It is possible to invoke the explorer on-demand in analytic applications, with the exploration limited to pre-designated measures and dimensions.

View the data explorer in action here:

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

 

1. Identify where to provide the functionality

Let us start with the chart below. We will add a button ‘Launch Explorer’ which will be used to invoke the explorer feature during run-time. Note that you also have the option to provide this feature at the application level – instead of at the chart level.

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

 

2. Add the Script

Launching the explorer is simple. Add the script API getDataExplorer() to the onSelect event of the button.
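In its simplest form, the script (using the chart name Chart_Category from the examples later in this blog) is just:

Chart_Category.getDataSource().getDataExplorer().open();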

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

 

Save and run the Analytic Application. Select the button ‘Launch Explorer’ to launch data explorer. By default, the script opens the explorer with the same dimensions and measures, as available in the chart.

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

 

3. Add Additional Dimensions & Measures

If you would like the user to be able to view additional dimensions and measures in the default Explorer view, you can use the setAdditionalDimensions() and setAdditionalMeasures() script APIs. The following code samples add the dimension “Vendor” and the measure “Bottle Cost”:

Chart_Category.getDataSource().getDataExplorer().setAdditionalDimensions(["VENDORNUMBER"]);
Chart_Category.getDataSource().getDataExplorer().setAdditionalMeasures(["STATEBOTTLECOST"]);
Chart_Category.getDataSource().getDataExplorer().open();

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

 

At runtime, the explorer shows the additional dimension and measure that were just added.

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

 

4. Add all Dimensions and Measures

If you would like all dimensions instead, use the getDimensions() function. To add all measures, there is no direct function available currently. However, you can accomplish this by using getMembers() and then creating a string array of measure IDs.

var measures_1 = Chart_Category.getDataSource().getMembers("Account");
var measures_2 = ArrayUtils.create(Type.string);
for (var i = 0; i < measures_1.length; i++) {
    measures_2.push(measures_1[i].id);
}
Chart_Category.getDataSource().getDataExplorer().setAdditionalDimensions(Chart_Category.getDataSource().getDimensions());
Chart_Category.getDataSource().getDataExplorer().setAdditionalMeasures(measures_2);
Chart_Category.getDataSource().getDataExplorer().open();

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

sap-analytics-cloud-application-design-Series-11-enable-explorer-using-analytic-functions

This will open the explorer with all the measures & dimensions during runtime.

* * *

In the subsequent blog, we will learn how to launch Smart Discovery from analytic applications.

Reach out to us here today if you are interested in evaluating if SAP Analytics Cloud is the right tool for you.


SAP Analytics Cloud – Application Design Series: 12 – Smart Discovery


In the previous blog of this series, we learned how to invoke the explorer functionality on-demand in analytic applications in SAP Analytics Cloud. In this blog, let us look at how to launch Smart Discovery from analytic applications.

 

Smart Discovery is a powerful feature that can generate a new story using the inbuilt machine learning algorithm after analyzing your data. In stories, we have Smart Discovery embedded within the data view. But in analytic applications, we would need to launch Smart Discovery – for either dimensions or measures – using scripting.

You can see Smart Discovery in action below:

 

Smart Discovery

In analytic applications, the function SmartDiscovery.buildStory() is used to launch Smart Discovery. The parameters for this function are the Smart Discovery Settings. There are two types of Smart Discovery Settings – SmartDiscoveryDimensionSettings and SmartDiscoveryMeasureSettings. Let us see both settings in detail.

 

1. Smart Discovery for a Dimension

Consider an example of a chart showing Sales across Category. When the end user selects a category, we would need to launch Smart Discovery that can compare the selected category to other categories with respect to all measures and dimensions.

sap-analytics-cloud-application-design-series-12-smart-discovery

 

Add the following script to the onSelect() event of the chart to launch Smart Discovery.

var selection = Chart_1.getSelection();
for (var i = 0; i < selection.length; i++) {
    if (selection[i].dimensionId === "Category") {
        var sel_member = Member.create(selection[i].dimensionId, selection[i].id);
    }
}
SmartDiscovery.buildStory(SmartDiscoveryDimensionSettings.create(Chart_1.getDataSource(), "Category", [sel_member]));

sap-analytics-cloud-application-design-series-12-smart-discovery

 

  • Assign the selected member of Category to a local variable sel_member.
  • Pass SmartDiscoveryDimensionSettings.create() as a parameter to the function buildStory() to launch Smart Discovery.
  • There are three mandatory parameters for the function create() – Data Source, Dimension, and Target Group.
  • Target Group is an array of members that need to be focused on.
  • In our example, we pass only one value – the selected member – as the Target Group parameter.

 

Save and Run the application. Select a category and you will get a popup to launch Smart Discovery.

sap-analytics-cloud-application-design-series-12-smart-discovery

Select ‘Launch Smart Discovery for Category’ to launch Smart Discovery in a new tab.

 

In the new tab, you can change the classification groups in the Smart Discovery panel.

sap-analytics-cloud-application-design-series-12-smart-discovery

 

Select Run to generate a Story. The story has two tabs – Overview and Key Influencers. Here you can see several visualizations that focus on the selected category, comparing it with the others.

sap-analytics-cloud-application-design-series-12-smart-discovery

 

2. Smart Discovery for a Measure

Smart Discovery for a Measure will provide various insights and visualizations focusing on the given measure. Here we have a drop down that has the list of measures. The user can select a measure and launch Smart Discovery for that measure.

sap-analytics-cloud-application-design-series-12-smart-discovery

 

Add the following script to the onSelect() of the button ‘Launch Smart Discovery’.

SmartDiscovery.buildStory(
    SmartDiscoveryMeasureSettings.create(
        Chart_1.getDataSource(),
        Dropdown_1.getSelectedKey()
    )
);

sap-analytics-cloud-application-design-series-12-smart-discovery

 

The function SmartDiscoveryMeasureSettings.create() has two mandatory parameters – Data Source and Measure. Please note that the measure list in the dropdown should have the key names of the measures.

 

Save & Run the application. Select the Launch Smart Discovery button. Similar to the Smart Discovery for Dimensions, a prompt will allow us to select ‘Launch Smart Discovery for Sales’ which in turn opens a new tab. Here we cannot set classification groups, although you can control the dimensions and measures of the data source in Advanced Options.

sap-analytics-cloud-application-design-series-12-smart-discovery

 

Select run to generate a Story. The story has four tabs – Overview, Key Influencers, Unexpected Values and Simulation.

sap-analytics-cloud-application-design-series-12-smart-discovery

Please note that we can skip the Smart Discovery popup and directly launch Smart Discovery using the function SmartDiscovery.generateUrlToBuildStory(), which returns a hyperlink.
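A hedged sketch of that variant, assuming generateUrlToBuildStory() accepts the same settings object as buildStory() (this signature is an assumption), would be:

// Sketch only: the settings argument mirrors buildStory() above (assumption)
var discoveryUrl = SmartDiscovery.generateUrlToBuildStory(
    SmartDiscoveryMeasureSettings.create(Chart_1.getDataSource(), Dropdown_1.getSelectedKey())
);
// The returned hyperlink can then be used as needed, for example opened through
// a navigation or hyperlink option of your choice.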

***

In the subsequent blog, we will learn about Date and Time Range functions and how to use them.

Reach out to us here today if you are interested to evaluate if SAP Analytics Cloud is right for you.


SAP Analytics Cloud – Application Design Series: 13 – Date & Time Range Functions


In the previous blog of this series, we learned how to launch Smart Discovery from analytic applications in SAP Analytics Cloud. In this blog, let us look at Date and Time Range functions and how to use them.

Analytic Applications support passing single or multiple members for filtering but do not support ranges. However, for Date dimensions, you can pass the time range as a parameter. Here below you can see various time range filters in action.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

Creating a Time Range

TimeRange is a parameter type that can be used to filter dimensions of type Date.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

A Time Range can be created using the function TimeRange.create(), which takes three parameters namely TimeRangeGranularity, Start Date and End Date.

Syntax: TimeRange.create(TimeRangeGranularity, Start Date, End Date)

TimeRangeGranularity can be of type Year, Halfyear, Quarter, Month, Day, Hour, Minute, Second or Millisecond, as seen below.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

For example, the following script can be used to create a month range between two dates.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

TimeRange.create(TimeRangeGranularity.Month, new Date(2017, 12, 27), new Date(2019, 3, 19));

By simply changing the granularity type you can get other time ranges based on year, quarter, day, etc.

 

Time Range Functions

Apart from the create function for Time Range, you can also create time ranges using three more functions.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

The syntax for these functions are as follows:

  1. Week Range – createWeekRange (start year, start week, end year, end week)
  2. Month Range – createMonthRange (start year, start month, end year, end month)
  3. Year Range – createYearRange (start year, end year)

For example, to create a range of months we can directly use the Month Range function as in the following script:

TimeRange.createMonthRange(2018, 2, 2019, 3);

Similarly, other time ranges can also be created by using these functions with appropriate parameters.
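For instance, a minimal sketch (reusing the chart name Chart_1 and the OrderDate dimension from the example later in this blog) that restricts a chart to a fixed range of years could be:

// Filter the chart's OrderDate dimension to the years 2018 through 2019
var yearRange = TimeRange.createYearRange(2018, 2019);
Chart_1.getDataSource().setDimensionFilter("OrderDate", yearRange);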

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

Dynamic Time Range

Let us see how to get the current date, month, quarter and year for dynamic filtering. You can generate the current (system) date using Date functions with Date.now() as the parameter. Once you have the local variable of type Date you can extract other information like a month, year, etc.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

Since there is no function to extract the quarter directly, you can derive it from the current month using a Math function.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

Now you can use these local variables in the TimeRange function to achieve dynamic filtering. Note that the getMonth() function returns the month as a value between 0 and 11 for January to December.

Also, note that you need to assign an appropriate hierarchy to the Date dimension for the Time Range filter to work properly.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

 

The following example shows how to filter a chart to the last two quarters dynamically, using scripting. First, a TimeRange variable of granularity Quarter is created (QuarterFilter). Then the variable is used in the setDimensionFilter function as a parameter.

sap-analytics-cloud-application-design-series-13-date-time-range-functions

var current_date = new Date(Date.now());
var current_year = current_date.getFullYear();
var current_quarter = Math.floor(current_date.getMonth() / 3);

if (current_quarter < 1) {
    var passed_quarter = 3;
    var passed_year = current_year - 1;
} else {
    passed_quarter = current_quarter - 1;
    passed_year = current_year;
}

var passed_quarter_starting_month = (passed_quarter * 3);
var passed_Quarter_date = new Date(passed_year, passed_quarter_starting_month, 1);
var QuarterFilter = TimeRange.create(TimeRangeGranularity.Quarter, current_date, passed_Quarter_date);
Chart_1.getDataSource().setDimensionFilter("OrderDate", QuarterFilter);

* * *

Reach out to our team here if you are interested to evaluate if SAP Analytics Cloud is the right tool for you.


Email Notifications in Azure


Email notifications are crucial to ensure the health of your Azure environment and processes. Azure supports email notifications to send alerts that monitor its resources and events. There are numerous methods in Azure to send email alerts. Below are a few common approaches:

  • Azure Logic App
  • Azure Automation Runbook

 

Azure Logic App

Azure Logic App is a cloud-based service which supports a feature to send email notifications through multiple services such as Outlook and Gmail.

 

Use Case

A simple approach to send an outlook mail when a blob in the storage account is added or modified:

  1. Create a logic App resource within a resource group.
  2. Add a Trigger event as “When a blob is added or modified (properties only)” from Azure Blob Storage event
  3. Add an action “Send an email” from Office 365 Outlook event and create a connection with your outlook ID
  4. Specify the following Parameters:
    • Required Parameters
      • Body
      • Subject
      • To
    • Optional Parameters
      • Attachments
      • BCC
      • CC
      • From (Send As)
      • Importance
      • Is HTML
  5. Dynamic Content can be provided to the parameters based on the output of previous steps

 

Logic App Designer

email-notifications-azure

 

Azure Automation Runbook

Azure Automation Runbook is a cloud-integrated user interface to develop PowerShell scripts. Using an SMTP configuration from a PowerShell script we can send email alerts to the users.

SMTP Configuration for outlook:

  • SMTP server name: smtp-mail.outlook.com
  • SMTP port: 587
  • SMTP encryption method: STARTTLS

 

Use Case

Email alerting when a pipeline run has failed with error details:

  1. Build an Azure automation runbook with SMTP Configuration for outlook server with the below script
  2. Configure a webhook for the runbook (Copy the webhook URL)
  3. In Azure Data Factory,
    • Build a Pipeline with a required activity
    • Handle Failure event with a web activity

email-notifications-azure

 

    • In settings configure the properties as shown below.
    • In this demo, the Pipeline name, Activity start time, Activity status, Error and Error description are captured. The recipient and subject are also passed through the pipeline. If there are multiple recipients, pass the values in parentheses, for example (‘XXX@XYZ.com’, ‘YYY@XYZ.com’).

email-notifications-azure

Note: A Failure event is specific to a single activity. So, if there are two or more activities in your pipeline, the Failure event should be handled individually for each activity, because the dependency between activities is a logical AND.

4. Whenever a pipeline activity fails, ADF will invoke the Azure Automation runbook through the webhook and send the error details.

  • Create an automation account and build a PowerShell runbook with the below code.
  • Update your Azure credentials in the automation account and pass the credential name in the code

5. The mail body can contain plain text or HTML content. Here, the body is crafted with HTML along with a hyperlink.

6. The following mail is produced by the above runbook whenever a pipeline activity fails:

email-notifications-azure



CDS Views – Introduction


This is an introductory blog on Core Data Services and CDS Views. You can find our other blogs on CDS here.

SAP HANA is more than just a database system. Hence, a change in the programming approach is needed to fully utilize the advantages that an SAP HANA system offers. The crux of HANA in terms of data processing is to push processing down to the database, making it faster and eliminating lag due to network transfers. Data-intensive operations can be performed in the database layer itself using the Code-to-Data paradigm.

cds-views-introduction

 

The Code-to-Data approach reduces system bottlenecks, increases calculation speed and reduces the movement of data from one layer to another. Core Data Services is one method of code push-down. Usually, where native execution is desired, Open SQL is the first instrument ABAP developers use to push data-intensive processing down to the database layer; but if reuse of data models or features such as Union and Association are required, then Core Data Services must be used.

Core Data Services is a collection of domain-specific languages (DSLs) and services for defining and consuming semantically rich data models. Entity definitions and the semantic relationships between entities can be developed using CDS. CDS artifacts are stored as DDIC objects, can be used in ABAP programs, and can serve as the source/target for DML statements. DDL sources and CDS entities are managed by ABAP; hence the entire lifecycle of CDS entities is controlled by the ABAP Change and Transport System (CTS).

The elements and artifacts that can be created using CDS include:

  • Views
  • Tables (Entities)
  • Associations
  • Annotations
  • User Defined data types
  • Contexts

The metamodels for an application that requires code push-down can be built using CDS views. A CDS view is more powerful than an SE11 view, as it provides an open DDL for building a meta-model repository involving database tables, database views, functions, and data types. There are ABAP CDS views and HANA CDS views. ABAP CDS views are database independent, whereas HANA CDS views are database dependent. The CDS objects created using HANA CDS are not controlled by the ABAP Dictionary and hence cannot be consumed in ABAP programs or Open SQL. HANA CDS requires the entity type definition for the DDIC table, whereas ABAP CDS does not require it, and hence duplication in the CDS layer is avoided.

CDS views are the way of the future and slowly all delivered content within SAP is being transitioned into CDS views.

Learn more about Visual BI’s SAP HANA offerings here.


CDS Views with Key Column


In this blog, we will cover some use cases and OData exposure of CDS Views created with Key column. You can find our other blogs on CDS here.

Code Push Down in ABAP applications is enabled by CDS Views, which are an extension of the ABAP Dictionary. ABAP CDS provides a framework for defining and consuming semantically rich data models on the central database of the application server, AS ABAP. ABAP Dictionary manages the Data Definition Language (DDL) and the Data Control Language (DCL) based on which the data models are defined. CDS Views can be created with or without a key column. Let’s look at some use cases for views created with a key column.

 

Use Cases for Creating View with Key

There are various scenarios where a view needs to be created with Key columns.

1. Master Data

Aggregating the results across multiple dimensions requires an Analytical Model. A basic view is created on the master data from the DDIC tables/views. As it fetches the data from the database table, there should be no data redundancy, since the view is created with the key column specified.

@ObjectModel.dataCategory: #DIMENSION is the annotation that is used to represent the Master data.

2. Foreign Key Association

Relationships between entities are defined by Associations. The Source and Target entities relate to each other with the help of the Foreign Key. The Primary Key of the Source entity (Representative Key) acts as the Foreign Key for the Target entity.

3. Value Help

Value Help provides the possible correct values of an input field to the user. This Value Help can be established with the Foreign Key Association in CDS.

4. Hierarchy

A Hierarchy in CDS can be created on a view with dataCategory as DIMENSION. The ‘Key’ field is mandatory for specifying the Hierarchy.

 

OData Exposure of View with Key

OData (Open Data Protocol) is a standard way of consuming RESTful APIs. An ABAP CDS view can be exposed as an OData service in different ways. In our scenario, OData exposure is done using the annotation @OData.publish: true.

CDS views can be exposed as OData regardless of whether a key column is present in the view or not. In this blog, we will cover the different methods of accessing the exposed data for a CDS view with a key column.

  1. Accessing the entire data using the Navigation Property, and
  2. Accessing Individual Records

 

Creating the CDS View with Key Column and its OData Exposure

For this Illustration, the CDS view is created by consuming BSEG Table with three input parameters. The Key is set and OData service is enabled.

Key: BELNR

Input Parameters : P_CurDate , P_ACC, P_Cur

cds-view-with-key-column

 

The OData service is generated, and the metadata contains two EntityTypes: one for the Key and Parameters, and the other EntityType for the remaining columns.

https://<ServerName>:<PortNo>/sap/opu/odata/sap/Y_FI_BSEG_WITH_KEY_CDS/$metadata

cds-view-with-key-column

 

The first EntityType created contains the Key column BELNR and the three Input Parameters.

cds-view-with-key-column

 

Accessing the Exposed Data

As seen in the image below, Set is the name of the NavigationProperty for the CDS View.

cds-view-with-key-column

 

Hence the entire dataset can be accessed using the OData Service given below

https://<ServerName>:<PortNo>/sap/opu/odata/sap/Y_FI_BSEG_WITH_KEY_CDS/Y_FI_BSEG_WITH_KEY(P_CurDate='20171025',P_ACC='D',P_Cur='USD')/Set

 

Accessing individual records

Individual records can be accessed by passing values for the Input Parameters & the Key as shown below: 

cds-view-with-key-column

 

https://<ServerName>:<PortNo>/sap/opu/odata/sap/Y_FI_BSEG_WITH_KEY_CDS/Y_FI_BSEG_WITH_KEYSet(P_CurDate='20171025',P_ACC='D',P_Cur='USD',belnr='90000536')

cds-view-with-key-column

 

We have now seen two distinct methods of accessing data exposed through OData when the view contains a Key column.

 

Learn more about Visual BI’s SAP HANA offerings here.


Blending Datasets in SAP Analytics Cloud


There are two ways to combine data sources in SAP Analytics Cloud. One is by combining data sources at the model level at the time of model creation – the details of which can be seen in this blog. The other is by blending datasets in SAP Analytics Cloud, which we will look at in detail here.

SAP Analytics Cloud offers this key capability to blend multiple data sources into the same chart or table by linking or joining common dimensions between them. Any filters passed or selections made are reflected in all the linked components on the page. Also, for blended datasets, the consolidated list of dimensions and measures from the two models is displayed in the Designer.

SAP Analytics Cloud now supports data blending between live data models and data models based on imported data stored in the SAC.

Blending between these different sources is supported – as seen in the image below:

blending-datasets-sap-analytics-cloud

 

For this blog, as an example, let us see how blending works between Live and Acquired data models.

Data Sources used:

  1. Live Connection – HANA
  2. Acquired data – BW

Create a new story by consuming these datasets in two different charts as shown below:

blending-datasets-sap-analytics-cloud

 

Linking Dimensions

Here are the steps to link dimensions:

1. Click on the ‘Link Dimensions’ option in the ‘More’ menu. Alternatively, you can find this option in the Data Source area of the Builder panel when you are creating or editing a chart.

blending-datasets-sap-analytics-cloud

 

The ‘Link Dimensions’ dialog box appears.

blending-datasets-sap-analytics-cloud

 

2. In this dialog box, you must link the primary model on the left to the secondary model on the right. Expand the drop-down list in the left pane and click on the ‘Add model’ option. In the ‘Select a model’ dialogue box, select the model ‘HBS_TEST’, and click on ‘Set’ to close the dialog box.

blending-datasets-sap-analytics-cloud

 

After linking, the dimensions and measures from both models (linked) will appear on the right of the panel.

blending-datasets-sap-analytics-cloud

Note: The Links that are created between data sources are available only in the stories and do not impact the underlying models.

 

There are a few options available to configure the ‘Link Type’. You can also define which dimensions are to be unlinked.

3. As shown below, there are three types of Linking based on data.

blending-datasets-sap-analytics-cloud

 

‘All primary data’ – Includes all data in the primary model, and only the corresponding matching data in the secondary model (similar to left outer join)

‘All data’ – Includes all data in the primary and secondary models (similar to full outer join)

‘Intersecting data only’ – Includes only matching data (similar to inner join)

 

Summary

The matrix below summarizes data blending possibilities between various types of models based on live data, acquired data, flat files, and non-SAP data.

blending-datasets-sap-analytics-cloud

 

Reach out to us here today if you are interested to evaluate if SAP Analytics Cloud is right for you.


Performance Enhancement for Live Connection in Tableau – Assume Referential Integrity


Using a live connection in Tableau to connect to databases with millions of records poses a big challenge: performance. Many performance enhancement techniques and features can be implemented to address this issue. In this blog, we will cover one of these features, Assume Referential Integrity.

What is Assume referential integrity?

Enabling this option in Tableau creates joins in the back end only between the tables that contain the dimensions or measures used in the sheet.

Note: The option is useful only when an inner join exists between the tables.

Let’s try to understand this with an example. The data source used in this example is a liquor sales dataset. Create a star schema for the dataset and join the tables as shown below:

Performance Enhancement for Live Connection in Tableau - Assume Referential Integrity

Image 1

Start the performance recording and build a sales by county bar chart. After the chart is built, stop the performance recording.

In our sheet, we have used the County dimension from the SS_LOCATION_IOWA_LIQOUR table and the Bottle Sold measure from the SS_FACT_IOWA_LIQOUR table.

Performance Enhancement for Live Connection in Tableau - Assume Referential Integrity

Image 2

The query in the above image can be obtained from the workbook generated after stopping performance recording.

Even though only two tables are used to build the chart, the query in the back end is run joining all the tables in the data source. This adds unnecessary, additional load in computing and executing the query.

We can avoid this by enabling the Assume Referential Integrity option from the data source tab.

Performance Enhancement for Live Connection in Tableau - Assume Referential Integrity

Image 3

After enabling the option, repeat the same steps as earlier. From performance recording workbook we get the following output:

Performance Enhancement for Live Connection in Tableau - Assume Referential Integrity

Image 4

After enabling Assume Referential Integrity, we can see from the above image that only the relevant tables are joined for the chart, that is, only the tables that contain the County and Bottle Sold fields. Thus, this option helps improve performance by joining only the necessary tables when a live connection to a database is used.

* * *

Learn more about Visual BI’s Tableau consulting & end user training programs here.


Window functions in Tableau


Window functions are among the very useful inbuilt functions of Tableau. Comparative analysis is one of the key areas where window functions are helpful. There are many window functions like WINDOW_MAX, WINDOW_MIN, WINDOW_AVG, etc.

In this blog let’s look at an application of window functions

Scenario: Highlight the months with highest and lowest sales for easier understanding of sales trend.

Solution: Build a sales by month bar chart (best practice would be to go for a line chart, but a bar chart makes window functions easier to understand).

Window functions in Tableau

Image 1

The challenge here is that the person analyzing the dashboard needs to spend a significant amount of time finding the months with the highest and lowest sales. We can utilize window functions to address this.

Window refers to the area inside a chart/pane boundary.

The window function to be used is as follows:

Window functions in Tableau

Image 2

This expression checks whether a month's sales are the highest in the window. This check is repeated for every month.
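A hedged reconstruction of that calculation (the exact output labels are assumptions; the view in Image 3 distinguishes the highest, lowest and remaining months) could be:

IF SUM([Sales]) = WINDOW_MAX(SUM([Sales])) THEN 'Highest'
ELSEIF SUM([Sales]) = WINDOW_MIN(SUM([Sales])) THEN 'Lowest'
ELSE 'Others'
END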

Drag and drop the calculation onto color marks to get the following output:

Window functions in Tableau

Image 3

In the above chart, the months with the highest, least and other sales are highlighted in different colors to easily distinguish them. Thus, the user analyzing the dashboard can easily find the months with highest and least sales.

If Year is added to the view, the months with the highest and least sales in each year are automatically highlighted.

Window functions in Tableau

Image 4

But what if only the months with the highest and least sales across all four years need to be highlighted, and not for each year? Let’s explore that in the next blog.

* * *

Learn more about Visual BI’s Tableau consulting & end user training programs here.


Showing Tables in Tooltips in SAP Lumira Designer


This is part of the Tooltip and Data Label Customization blog series. In the earlier blog, we looked at displaying multiple measures in a tooltip. In this one, you will learn about showing tables in tooltips in SAP Lumira Designer using the VBX tooltip editor pane. This editor allows you to include dimensions and measures in any order with the desired spacing, colors and highlighting.

For this example, we use sample retail sales data with products and their sales amounts. Copy the code given below and paste it in the tooltip editor pane. You could also write your own HTML code here. Replace all the dimension and measure names with your data. You could refer to the blog Displaying Multiple Measures in a Tooltip, for details on how to include dimensions and measures in the tooltip via the tooltip editor pane. The tooltip editor renders all the spaces and new lines provided in the code. To avoid that, we have minified the code to remove the extra spaces and new lines. You could use one of the code beautifiers available online to edit the structure of the table. Ensure the code is minified before pasting the template.

 

<table width="100%" cellspacing="1" cellpadding="0" border="0" align="center" bgcolor="#000000"><tr bgcolor="#ffffff"><td colspan="2" align="center">Current Dimension MemberText</td></tr><tr bgcolor="#ffffff"><td>Cost AmountText</td><td>Cost Amount Value</td></tr><tr bgcolor="#ffffff"><td>Forecast AmountText</td><td>Forecast Amount Value</td></tr><tr bgcolor="#ffffff"><td>Discount AmountText</td><td>Discount Amount Value</td></tr><tr bgcolor="#ffffff"><td>QuantityText</td><td>Quantity Value</td></tr></table>

 

The tooltip editor looks like the image below. You can see that the Dimension Text (which is the heading) has been highlighted in a bold font. This is a simple HTML table template with rows and columns as dimension members and measures. You can edit this structure to have the desired number of rows and columns to fit your data. Below is a beautified version of the above code, showing the rows and columns of the table more clearly.
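(The expanded form below is for readability only and is functionally identical to the minified template above; minify it again before pasting it into the tooltip editor.)

<table width="100%" cellspacing="1" cellpadding="0" border="0" align="center" bgcolor="#000000">
  <tr bgcolor="#ffffff">
    <td colspan="2" align="center">Current Dimension MemberText</td>
  </tr>
  <tr bgcolor="#ffffff">
    <td>Cost AmountText</td>
    <td>Cost Amount Value</td>
  </tr>
  <tr bgcolor="#ffffff">
    <td>Forecast AmountText</td>
    <td>Forecast Amount Value</td>
  </tr>
  <tr bgcolor="#ffffff">
    <td>Discount AmountText</td>
    <td>Discount Amount Value</td>
  </tr>
  <tr bgcolor="#ffffff">
    <td>QuantityText</td>
    <td>Quantity Value</td>
  </tr>
</table>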

showing-tables-tooltips-sap-lumira-designer

showing-tables-tooltips-sap-lumira-designer

 

The tooltip with the HTML table is shown below. This can be used in scenarios where the user needs additional information which is not available in the chart data. In this chart, only the Sales Amount is provided in the series, whereas in the tooltip, we display other measures as well. The tooltip in the chart below gives additional information in a clear and concise manner.

showing-tables-tooltips-sap-lumira-designer

 

If you are interested in customizing your tooltips to display tables, you could register for a free trial and try it out with your own data.


R Visualizations in SAP Analytics Cloud Series: 1 – Dynamic Filtering


This series is about leveraging R Visualizations in SAP Analytics Cloud. In this blog, let us take a look at creating R widgets in an analytic application and performing dynamic filtering on them.

 

Advantages of R Visualizations in SAP Analytics Cloud

R visualizations are great for creating plots used in exploratory data analysis. R being an open-source language, it has countless libraries that allow developers to extensively customize charts, from the UI down to the underlying data, as per the reporting needs. It is highly flexible and easy to use. For more on R, visit this link.

R visualizations in SAP Analytics Cloud can interact with other SAC components just like any other standard chart. The image below shows a standard column chart in SAC and a column chart created using R.

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

Dynamic Filtering in R Visualizations

Let us see how to dynamically filter R visualizations. Here, a simple column chart is used. Shown below is the Dynamic filtering of R Visualization in action:

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

Data-source Acquisition

The input to R visualizations is a Data Frame. A Data Frame is simply like a table with rows and columns where the structure of the table is built based on a list of fields.

Create an R visualization on the canvas. In the Builder tab of the R visualization, input data (data frame) can be added using the Add Input Data option.

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

For example, add a generic sales data source as input data to the R visualization.

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

Add the Category dimension to the rows and the Volume Sold (Liters) measure to the columns. You can see a preview of the selected rows and columns from the data source on the right side of the input data pane. Click OK to save the structure. Now the Data Frame (Liquor_Sales_Data) is ready to be used for the R visualization.

 

Adding R Script

After the initial Data Frame has been set up, use the Edit Script option from the Builder pane to set up R script to visualize the data.

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

Add the following script to create a column chart:

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

The script includes the library ggplot2 as it is elegant and easy to use. You can use any library as required. You can learn more about ggplot2 here.

The Data Frame is assigned to a local variable dFrame. The function ggplot renders the column chart with geom_bar(), assigning Category to the x-axis and Volume Sold (Liters) to the y-axis using the ggplot2::aes() function.
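A hedged reconstruction of that script (assuming the column names in the data frame are exactly Category and Volume Sold (Liters)) could look like this:

library(ggplot2)

# Assign the input data frame configured in the Builder panel to a local variable
dFrame <- Liquor_Sales_Data

# Column chart: Category on the x-axis, Volume Sold (Liters) on the y-axis
ggplot(dFrame, aes(x = Category, y = `Volume Sold (Liters)`)) +
  geom_bar(stat = "identity")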

The script pane also displays a preview of the code executed. The developer has the liberty to play around with R scripts here and view the results directly using the Execute command button. When done, click on the Apply button for adding the R visualization to the canvas.

 

Filtering R Visualization

Add a dropdown that would filter dimension Packs from the data model.

1. Configuring the Dropdown

To populate the members, include the following script in the onInitialization() function of the application.

Here the function getMembers() is used to get the list of packs. And the function addItem() is used within a loop to populate the members in the dropdown.
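A hedged sketch of that onInitialization() script (the widget names Chart_1 and Dropdown_1 and the technical name of the Packs dimension are assumptions) is:

// Populate the dropdown with the members of the Packs dimension
var members = Chart_1.getDataSource().getMembers("Packs");
for (var i = 0; i < members.length; i++) {
    Dropdown_1.addItem(members[i].id, members[i].description);
}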

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

2. Dynamic filtering

Now add the following script in the onSelect() function of the DropDown to achieve dynamic filtering functionality.

Note that you need to get the Data Frame of the R widget using getDataFrame() before you access its data source using getDataSource(), whereas, in the standard SAC chart, the data source can be directly accessed using getDataSource().
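A hedged sketch of that onSelect() script (the widget names and the dimension's technical name are assumptions) is:

// Filter the R visualization's data frame to the pack selected in the dropdown
var selectedPack = Dropdown_1.getSelectedKey();
RVisualization_1.getDataFrame().getDataSource().setDimensionFilter("Packs", selectedPack);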

Now you will be able to dynamically filter the R visualization.

r-visualizations-sap-analytics-cloud-series-1-dynamic-filtering

 

* * *

In the subsequent blogs, we will learn more about modifying the aesthetic properties of R visualizations in SAP Analytics Cloud.

Reach out to our team here if you are interested in evaluating if SAP Analytics Cloud is right for you.



SAP Analytics Cloud Application Design is now GA!


The wait is over, as SAP has announced the big release of its analytic application capability, popularly known as SAC Analytic Application, SAC Application Design or SAC App Design. SAP Analytics Cloud Application Design is GA from version 2019.8.

 

SAC Analytic Application, which could potentially become the enterprise application design tool, allows you to utilize ad-hoc content and then take it one step further by customizing and configuring the behavior of UI elements with a set of specific script API elements. It is a Lumira Designer-type application, but on the cloud, which also brings together in-built planning and predictive capabilities.

If you are an existing SAP Analytics Cloud user, you will immediately notice the similarities in functionality between the widgets and components in Analytic Application and Stories. The flexibility of these applications ranges from simple static dashboards with just a few charts and tables to complex and highly customized applications with custom layouts and interactivity options created with scripts that provide numerous options to browse and navigate data. This will be the go-to tool for developers as well as data consumers in the future.

SAC Analytic Application is maturing fast. With SAP moving from a bi-weekly to a quarterly release cycle, we can expect some great new features and updates in every release cycle. SAP’s roadmap for Analytic Application looks promising with a lot of improvements in data connectivity, flexibility in developing applications and also some intuitive collaborative features. Above all, the licensing is included as part of the SAP Analytics Cloud subscription itself. Looking forward to some exciting dashboarding using SAC Analytic Application!

 

We have already explored a lot of capabilities that SAC Analytic Application offers and have come up with a series of blogs, the links to which are provided below. We have a lot more interesting blogs on the way.

Stay Tuned!

 

For a summary of all our blogs related to SAP Analytics Cloud, please visit: https://visualbi.com/blogs/category/sap/sap-analytics-cloud/

More detailed blogs on SAP Analytics Cloud Application Design focused topics can be found below:

  • SAP Analytics Cloud – Application Design Series: 1 – Overview https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-overview/
  • SAP Analytics Cloud – Application Design Series: 2 – Creating Your First Analytic Application https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-creating-your-first-analytic-application/
  • SAP Analytics Cloud – Application Design Series: 3 – Leveraging Popups https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-leveraging-popups/
  • SAP Analytics Cloud – Application Design Series: 4 – Dynamic Visibility https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-dynamic-visibility/
  • SAP Analytics Cloud – Application Design Series: 5 – Introduction to Scripting https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-introduction-scripting/
  • SAP Analytics Cloud – Application Design Series: 6 – String, Math, Array, Date & Conversion Functions https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-string-math-array-date-conversion-functions/
  • SAP Analytics Cloud – Application Design Series: 7 – Global Variables https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-global-variables/
  • SAP Analytics Cloud – Application Design Series: 8 – Scripting API Overview https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-scripting-api-overview/
  • SAP Analytics Cloud – Application Design Series: 9 – Linked Analysis using Scripting https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-linked-analysis-using-scripting/
  • SAP Analytics Cloud – Application Design Series: 10 – Dynamically changing Measures and Dimensions https://visualbi.com/blogs/sap/sap-analytics-cloud/application-design-series-dynamically-changing-measures-dimensions/
  • SAP Analytics Cloud – Application Design Series: 11 – Enable Explorer using Analytic Functions https://visualbi.com/blogs/sap/sap-analytics-cloud/sap-analytics-cloud-application-design-series-11-enable-explorer-using-analytic-functions/
  • SAP Analytics Cloud – Application Design Series: 12 – Smart Discovery https://visualbi.com/blogs/sap/sap-analytics-cloud/sap-analytics-cloud-application-design-series-12-smart-discovery/
  • SAP Analytics Cloud – Application Design Series: 13 – Date & Time Range Functions https://visualbi.com/blogs/sap/sap-analytics-cloud/sap-analytics-cloud-application-design-series-13-date-time-range-functions/

 

 

Reach out to us here today if you are interested in evaluating whether SAP Analytics Cloud is right for you.


The post SAP Analytics Cloud Application Design is now GA! appeared first on Visual BI Solutions.

6 Key Differences between SAP Analytics Cloud Application Design and SAP Lumira Designer


SAP Analytics Cloud (SAC) has been around for a while and has primarily been a tool for cloud-based Data Discovery, especially for customers looking to seamlessly connect to their existing cloud-based solutions. As of version 2019.8, SAP Analytics Cloud brings customers the ability to also create applications in the cloud in the form of Application Design. This new feature comes as part of existing BI licenses and does not need to be purchased additionally.


Source: SAP

 

With the addition of application-building capabilities, SAP Analytics Cloud now looks to offer the same capabilities as SAP Lumira Designer for customers looking to use mostly cloud-based data sources. However, SAP Lumira Designer has been around for quite a while, and there are quite a few differences between SAP Analytics Cloud’s Application Design and SAP Lumira Designer. For the most part, these differences are negligible, but it is still important for customers to understand what the key differences are, which is what we’ve outlined here.

 

1. Wider Variety of Data Sources

SAP Lumira Designer can natively connect to 3 types of Data Sources – SAP BW, SAP HANA, and Universes. For any other type of Data Source, it would require a Custom Data Source component to be built using the SDK.


Wider range of Data Sources to connect to for SAP Analytics Cloud’s Application Design

 

SAP Analytics Cloud’s Application Design can connect to any Data Source that SAP Analytics Cloud can connect to – it can connect to both Cloud and On-Premise Data sources that are either extracted or live (depending on the Data Source in question) as long as a “Model” is built on top of the existing Data Source. This gives Application Design quite a bit of flexibility.

 

2. Data Source Objects

SAP Lumira Designer allowed users to add objects called “Data Sources” to the application they were building. These objects or components allowed Designers to connect to data from BW Queries, HANA Views or Universes and could bind to different components within the canvas. The best part about Data Sources was that the same Data Source could be re-used by multiple components, and this opened the door to a lot of other performance optimization techniques.


SAP Analytics Cloud’s Application Design requires Models

 

SAP Analytics Cloud’s Application Design, on the other hand, does not yet have the concept of Data Sources. Instead, Charts and Tables added to the canvas can now only connect to Models created within the tool. Live Data from BW Queries or HANA Views will also have to have a Model created before they can be consumed by the visualizations. This takes away the ability to streamline performance on the application but does add the advantage of being able to create custom calculations on the fly when using imported data sources.

 

3. Data Binding

SAP Lumira Designer can bind the content within List Boxes and Dropdown menus (or quite a few other components for that matter) directly to Data Sources. So, if designers wanted a list of Products from the BW query they were using on their applications, it was simply a matter of binding the right property of their list box to the data source.


Data Binding options are not yet available on Application Design

 

In SAP Analytics Cloud’s Application Design, components such as Dropdown menus or Radio Button Groups (List Boxes are missing, for some reason) do not have the ability to bind directly to a Data Source. The items inside will need to be populated manually. However, as a workaround, it is possible to get a list of all items within the Data Source and load them into the component using scripting. Since this is how Designer started out, it could be just a matter of time before the same ability comes to SAC Application Design.

 

4. Components

Having been around for a while, SAP Lumira Designer has a wider variety of components, including Table components, Chart components, Filter components, Basic components for interactions and a robust set of Container components for different types of applications and target devices. These are, of course, standard components. With a very robust SDK, SAP Lumira Designer also has plenty of aftermarket components available.


Limited list of components owing to the newness of the tool

 

SAP Analytics Cloud’s Application Design is newer to the market and is still building on a limited set of components. Right now, the tool has very basic components such as Charts, Tables and a few basic components for interactions. Mapping capabilities are missing as well. However, it also supports R-based Visualizations, which is quite an addition, and more components can definitely be expected going forward.

 

5. Scripting

SAP Lumira Designer uses its own scripting language known as BIAL scripting. This is a derivation from JavaScript and closely resembles Microsoft’s VBScript. Scripting is limited to the events and methods defined within the tool, and it is not possible to integrate external scripting languages with Lumira’s. However, it is still possible to build custom components which compile other scripting languages separately.


Flexible Scripting options for SAP Analytics Cloud’s Application Design

 

At the outset, SAP Analytics Cloud’s Application Design seems to share the same scripting language as SAP Lumira Designer. While the number of methods and events available is far more limited considering the newness of the tool, it is possible to use a freer form of scripting closer to JavaScript, including constructs like For and While loops, Switch-Case statements and more. Moreover, R-based analyses are already supported through components within the tool. In this aspect, Application Design shows a lot more promise.
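For instance, a hypothetical widget script in Application Design could freely combine a loop and a switch statement, something BIAL scripting does not allow in this free form. The widget name Text_1 below is illustrative only:

var years = ["2017", "2018", "2019"];
for (var i = 0; i < years.length; i++) {
    switch (years[i]) {
        case "2019":
            // Write the matching year into a text widget
            Text_1.applyText("Current year: " + years[i]);
            break;
        default:
            break;
    }
}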

 

6. Integration with External Applications

SAP Lumira Designer has the capability to integrate with existing SAP BI solutions such as Web Intelligence reports, BEx web queries or even jump to other Analysis Office applications. While the SDK allows for a degree of added functionality, including the ability to embed web pages within Designer, the ability to interact with these web pages could be a bit limited.

SAP Analytics Cloud’s Application Design, on the other hand, has been built to handle integration with external applications such as web pages and business applications, and even has the capability for closed-loop scenarios. The tool is built with native capabilities to handle OData calls to transaction systems and bi-directional communication with other business applications as well. And of course, it can seamlessly navigate to existing stories, explorer and smart functions within the SAP Analytics Cloud environment.

———————

It hasn’t been too long since SAP came out with Application Design for Analytics Cloud. Therefore, it should come as no surprise to customers that the capabilities are a little limited at the moment – this is, after all, version 1.0 of Application Design for the cloud. However, even version 1.0 here seems more mature than version 1.0 of SAP Lumira Designer (initially known as SAP Design Studio).

The seamless integration with the rest of the Analytics Cloud capabilities and several other key capabilities such as the ability to integrate R-based analyses, perform a closed-loop operation with transaction systems and seamlessly integrate with business applications shows great promise for SAP Analytics Cloud. With agile quarterly release cycles, the Application Design feature is likely to mature very quickly, leaving customers with one important consideration – will customers be able to move to the cloud fast enough?

For a summary of all our blogs related to SAP Analytics Cloud Application Design, please visit: https://visualbi.com/blogs/category/sap/sap-analytics-cloud/

Reach out to us here today if you are interested in evaluating whether SAP Analytics Cloud is right for you.


The post 6 Key Differences between SAP Analytics Cloud Application Design and SAP Lumira Designer appeared first on Visual BI Solutions.

Best Practices for Installing Tableau on VMware


VMware has become one of the most popular virtualization platforms, and as more organizations opt to virtualize their infrastructure, it is common to run Tableau services in a virtualized environment. Even with advancements that make virtual machines behave much like physical hardware, there are a few considerations to keep in mind for effective utilization of the deployment.

 

Installing Tableau on VMware

The following are the key best practices to be followed when planning your Tableau instances on VMware. In addition to this, the general guidelines from Tableau and VMware should also be adhered to for the Tableau setup and VMware implementation.

 

Latency Sensitivity

Tableau is extremely latency sensitive and requires very low latency and jitter when performing complex data visualization or analysis. Latency settings greater than 40% are recommended. Network latency between nodes should be less than 10 ms, with single-hop contiguous network access.

 

Dedicated vCPU and RAM

It is recommended to have dedicated vCPU and RAM allocation. Tableau does not work well with burst capacity, especially during peak resource utilization.

 

Disk

Tableau recommends high disk I/O for performance: 250 MB/s write and 1 GB/s read. A tiered SAN is recommended for the Tableau disk to achieve greater performance.

 

CPU Shares

Tableau is resource intensive, and it is recommended not to place it alongside other VMs that have the same shares. Tableau Server must be given higher shares in the resource pool. Each node in the cluster should have the same shares.

 

Reservations

Tableau recommends having reserved resources. The reservation can be split across nodes and used along with shares for effective utilization. At a minimum, the required resources should be reserved.

 

VM Migrations

VMware vMotion can be used to manage VM migrations and handle failover for the Tableau VMs.

 

Host Cluster

For high availability, it is recommended to install Tableau on DRS-enabled Clusters.

 

VMware Tools

It is recommended to install VMware Tools to improve the performance of the Virtual Machine.

 

Worker Nodes

The worker nodes should also be added to a DRS-enabled cluster to enable live migrations and failover.

 

MAC address

The MAC address should be set to static to avoid instances becoming unlicensed when migrated to different hosts. Tableau licenses depend on the MAC address and UUID of the hardware, and changes to these during migration would render the instances unlicensed.

 

Boundary of Migrations

It is recommended to keep migrations within a DRS-enabled cluster. Latency, performance, and hardware / MAC address changes should be considered for cross-vCenter, cross-vSwitch and long-distance migrations.

 

Virus Scan

Security scans and the in-memory engine can impact performance. Tableau applications should be excluded from scans, or scans should be scheduled during off-peak hours. In-memory scans should also be disabled.

 

We will be discussing the best practices for installing Tableau on Azure in our subsequent blog. Click here for more blogs on Tableau.


The post Best Practices for Installing Tableau on VMware appeared first on Visual BI Solutions.

Best Practices for Installing Tableau on Azure


In our previous blog, we discussed the best practices for installing Tableau on VMware. Now let us take a look at the best practices for installing Tableau on Azure.

Azure is one of the most widely used cloud providers, and a lot of companies use Azure for their computing as well as data processing needs. Running Tableau on Azure gives a lot of benefits such as scalability, performance and easy access to Azure data sources. Tableau also supports integration with Azure Active Directory for SSO.

Tableau provides extensive resources on running in Azure, which you can access at https://www.tableau.com/solutions/azure. In this blog, we will talk about some of the best practices and configurations for installing Tableau on Azure.

 

VM Type

Choosing the right VM type is very important, as Tableau is a latency-sensitive application and requires adequate memory and CPU resources for effective processing. Tableau recommends the following VM types:

  • D Series and DS Series, suited for high-performance applications like Tableau
  • D16s_v3 or DS13_V2 instances recommended for production
  • DS12_V2 for POC / Development

 

Deployment

Tableau has a Marketplace item in Azure which you can use to quickly spin up a pre-defined Tableau Server. This can be used to quickly get Tableau running for a POC. For production, it is recommended to create a separate VM and install Tableau on it.

 

Dedicated vCPU and RAM

It is recommended to have dedicated vCPU and RAM allocation. Tableau does not work well with burst capacity, especially during peak resource utilization. The D and DS series VM types provide the allocated memory.

 

Disk

Tableau recommends high disk I/O for performance. It is recommended to have separate volumes for the OS and the Tableau installation: a 30-50 GB volume for the operating system and a 100 GB or larger volume for Tableau. Premium Storage (P20) is recommended for faster I/O performance.

 

Network

Tableau requires high network bandwidth to Azure so that report performance is faster. It is recommended to enable Accelerated Networking on the VM. For dedicated and secure communication, ExpressRoute is recommended.

 

Load Balancer

If a load balancer is required for clustering, the Azure Load Balancer can be used for load balancing.

 

Click here for more blogs on Tableau.

Learn more about Visual BI’s Microsoft Azure offerings here.


The post Best Practices for Installing Tableau on Azure appeared first on Visual BI Solutions.

Parameterize Connections in Azure Data Factory


Azure Data Factory is a managed cloud service that is built for complex data orchestration processes and hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects. In this blog, we will demonstrate how to parameterize connection information in a Linked Service, which will enable the passing of connection information dynamically, and eliminate the need to create multiple Linked Services for accessing servers with many databases.

 

Linked Services

Linked Services can be thought of as connection strings, which define the connection information needed for Azure Data Factory to connect to external data sources.

For example, if you want to parameterize an Azure SQL Database connection in Azure Data Factory, you would start by creating a new Linked Service and then add dynamic content to the Linked Service properties that you want to parameterize, such as the server name, database name or user credentials.

 

Create Parameters

Linked Service enables you to parameterize connection information so that values can be passed dynamically.


 

Add Dynamic Content

  • Choose the ‘Enter manually’ option
  • Click on ‘Add Dynamic Content’ under each input box for mapping the parameter

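Under the hood, the resulting Linked Service definition looks roughly like the JSON sketch below. The name LS_AzureSql_Parameterized and the choice to parameterize only the server and database names are illustrative; credentials can be parameterized the same way or, better, stored in Azure Key Vault.

{
    "name": "LS_AzureSql_Parameterized",
    "properties": {
        "type": "AzureSqlDatabase",
        "parameters": {
            "ServerName": { "type": "String" },
            "DatabaseName": { "type": "String" }
        },
        "typeProperties": {
            "connectionString": "Data Source=@{linkedService().ServerName};Initial Catalog=@{linkedService().DatabaseName};Integrated Security=False;"
        }
    }
}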

 

After creating the Linked Service, you will need a dataset to invoke the dynamic content in the Linked Service that accesses a table or query in the database. For example, if you want to move data from Azure SQL Database to Azure SQL Data Warehouse, then you would need to create two datasets one for the source Azure SQL Database and the other one for the sink Azure SQL Data Warehouse.

 

Create Datasets

  • Invoke the Linked Service which we already created and map the Linked Service to the Dataset for accessing the data dynamically, without creating another Linked Service (a JSON sketch of such a dataset follows the images below)
  • Image (1) represents the Dataset accessing the Azure SQL Database “sqldb” by passing values into the parameter fields
  • Image (2) represents the Dataset accessing the Azure SQL Data Warehouse “sqldw” by passing values into the parameter fields
  • All the values are parameterized, so we can dynamically pass values into Server Name, Database Name, Username and Password
  • To illustrate our scenario, only database names are dynamically passed and the data is accessed from the database

Image (1) – Accessing ‘sqldb’ dynamically through parameters

 


Image (2) – Accessing ‘sqldw’ dynamically through parameters
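As a rough sketch (again with illustrative names), the dataset definition declares its own parameters and maps them through to the Linked Service parameters:

{
    "name": "DS_AzureSql_Parameterized",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": {
            "referenceName": "LS_AzureSql_Parameterized",
            "type": "LinkedServiceReference",
            "parameters": {
                "ServerName": "@dataset().ServerName",
                "DatabaseName": "@dataset().DatabaseName"
            }
        },
        "parameters": {
            "ServerName": { "type": "String" },
            "DatabaseName": { "type": "String" },
            "TableName": { "type": "String" }
        },
        "typeProperties": {
            "tableName": "@dataset().TableName"
        }
    }
}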

 

Now let us see how to use the parameterized connection in a pipeline.

 

Create a Pipeline

Create a pipeline and invoke the dataset in the source or sink based on your needs. Once the dataset is selected, it will prompt for the parameters you created in the dataset.


 

  • Pass the parameter values from the pipeline into the dataset (see the sketch below)
  • The same dataset can be used in different pipelines to access different databases or servers
  • In the pipeline below, the same Dataset and Linked Service are used; however, the data is extracted from the SQL Data Warehouse, sqldw
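A Copy activity inside the pipeline would reference the parameterized datasets roughly as sketched below. The activity name, server name and table names are made up for illustration; only the database names sqldb and sqldw come from this scenario.

{
    "name": "Copy_sqldb_to_sqldw",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "AzureSqlSource" },
        "sink": { "type": "SqlDWSink" }
    },
    "inputs": [
        {
            "referenceName": "DS_AzureSql_Parameterized",
            "type": "DatasetReference",
            "parameters": {
                "ServerName": "sqlsrv01.database.windows.net",
                "DatabaseName": "sqldb",
                "TableName": "SalesOrders"
            }
        }
    ],
    "outputs": [
        {
            "referenceName": "DS_AzureSql_Parameterized",
            "type": "DatasetReference",
            "parameters": {
                "ServerName": "sqlsrv01.database.windows.net",
                "DatabaseName": "sqldw",
                "TableName": "SalesOrdersStaging"
            }
        }
    ]
}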

 


 

We now have a pipeline that uses a single Linked Service for connecting to multiple databases within a single server. Implementing Linked Services this way reduces overhead and improves the manageability of your data factories.

 

Learn more about Visual BI’s Microsoft Azure offerings here.


The post Parameterize Connections in Azure Data Factory appeared first on Visual BI Solutions.
