
Dimensional Node- Adding Drilldowns to a Value Driver Tree’s Node


Value Driver Trees are a good tool to use for modeling business scenarios and allow you to visualize the flow of a business or its operations. More importantly, these visualizations allow analysts to map their key operational processes to their most important business KPIs by establishing logical and/or mathematical relationships between them. Thus, they allow users to drilldown and slice & dice data from the topmost level of their business pyramid and find insights that may not be obvious or even represented on a traditional report or dashboard. This could also include simulations and what-if scenarios.

One of the key elements for a good value driver tree is the flexibility to map any data that a business analyzes into the tree form. While traditionally, Value Driver Trees are only expected to show the operational drivers and business KPIs, a truly flexible model needs to allow a little more slice and dice capabilities – such as the ability to drilldown into one of the drivers and take a look at the key factors that contribute to that driver’s value.

Visual BI’s Value Driver Tree for SAP Lumira Designer brings this capability to end users, allowing them to map drilldown data to a particular node on the value driver tree using what is called a “Dimensional Node”.

 

What is a Dimensional Node?

The best way to explain the concept of a “Dimensional Node” is an example. What we could do here is to take an example of a typical process industry – such as a Copper Mining business – in which any of the operational drivers could be a constituent of multiple factors. First, here’s the tree itself:


In our scenario, we are modeling the Value Drivers that contribute to the Net Profit of the organization, broken down into Revenue and Cost elements. For simplicity, the focus here is on the highlighted path that follows the breakdown of Total Costs. In this example, one of the key elements that dictates costs is the Mill Throughput, and one of the key factors limiting throughput is Internal Delays.


A term as generic as “Internal Delays” may be acceptable for the quick What-If scenarios that high-level business users may want. However, in certain cases, an analyst taking a closer look or following a more realistic simulation path may seek more insight into the term – they may want to see what exactly “Internal Delays” means, and how the figure shown was arrived at.

To give this ability to a business user, the modeler can incorporate this particular node as a “Dimensional Node”, which can draw all of its data from a different data source if needed, yet can also be aggregated and included in the Value Driver Tree as a single node. An abstract of this concept is represented in the figure below:


In this example, the scenario described has been modeled by first creating a “Dimensional Node” component within the Value Driver Tree application and using a BW data source to bring the Internal Delay detail data into the component. This Dimensional Node component is then consumed within the Value Driver Tree model. When an end user wants to drill down to see the detail data, all they need to do is click the “Table” icon next to the node. They are presented with a popup window showing the drilldown details, which would look something like this:


If required, a comparison series for each of the constituent dimension members can also be displayed on a separate tab without much effort – this simply needs to be included as part of the data source.

 

When does using a Dimensional Node instead of child nodes make sense?

When running What-If scenarios using the Value Driver Tree, a concise view of the most important drivers is necessary to minimize the scrolling required to follow the impact of simulations across multiple nodes. Including child nodes that are inconsequential to a high-level user defeats this purpose, making the tool more cumbersome to use. This, of course, is merely one part of the problem.

Another important reason is that if the number of constituent members of a driver is large, there would be too many child nodes, cluttering the tree unnecessarily. For example, say one of the nodes being modeled is the total sales for all products. Including every product and its individual sales as child nodes could add dozens or hundreds of extra nodes to the tree. In such cases, the Dimensional Node concept can introduce a lot of flexibility into the Value Driver Tree.

 

Reach out to us today to learn more about Visual BI’s Value Driver Tree software for SAP Lumira Designer.

Subscribe to our Newsletter

The post Dimensional Node- Adding Drilldowns to a Value Driver Tree’s Node appeared first on Visual BI Solutions.


Visualize ‘Top N & Others’ in SAP Lumira Designer using Ranking Measure Sort


In our previous blog, we outlined a way to achieve a visualization for the Top 5 & ‘Others’ through Query Designer and Lumira Designer (formerly known as BusinessObjects Design Studio). This approach is a great workaround to answer a very common request from users, provided that N (the number of top/bottom values) is less than 20 and the ranking does not change dynamically based on users’ filter selection.

Please make sure you have read our previous blog before continuing.

If you have implemented the above-mentioned solution, you will notice that the top/bottom 5 customers are not sorted in order of ascending/descending profitability. That is because the getMembers() method returns the top 5 customers in alphabetical order from the master table and then passes them to the Query 3 variables. For our Top 5 customers to show up in ranked order, we need to pass the customer values to Query 3 in ascending or descending order of profitability. To do that, we will follow these steps:

  1. Get all the Profitability values from Query 2 and store them in a float array. Then rank this array in ascending/descending order, based on the requirement. The index of each float value will then reflect the order we want.
  2. Get all the Customer member values and store them in a text array (this will be in alphabetical order).
  3. Loop through this customer array and use the getData() method to compare each customer’s data value with the values in the float array from step 1. When the values match, the index of the float value becomes the new order position for that customer.
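The logic of the three steps above can be sketched in Python (purely illustrative – the actual implementation uses Lumira Designer scripting, and all names here are hypothetical):

```python
# Illustrative sketch of the ranking approach; not Lumira script.
def rank_members(members, values, descending=True):
    # Step 1: sort the measure values (profitability) into ranked order.
    ranked_values = sorted(values, reverse=descending)
    # Steps 2-3: each member's new position is the index of its value
    # in the sorted array (assumes distinct values).
    ranked_members = [None] * len(members)
    for member, value in zip(members, values):
        ranked_members[ranked_values.index(value)] = member
    return ranked_members

customers = ["Alpha", "Beta", "Gamma"]   # alphabetical, as getMembers() returns them
profits = [120.0, 450.0, 300.0]          # looked up per customer via getData()
print(rank_members(customers, profits))  # → ['Beta', 'Gamma', 'Alpha']
```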

Here are the scripts to achieve that (in addition to the queries created in the above-referenced blog post):

 

1. Create a Global Script object called CALCULATION, and a function called rank() under it, with the following script:

The function takes a float number and a float array, and inserts the number at the position that keeps the array sorted.


var len = floatArray.length;
var arrayCopy = [1.0]; arrayCopy.pop(); //Create an empty copy of the array
floatArray.forEach(function(element, index) {
  arrayCopy.push(element);
});
if (len == 0) {
  floatArray.push(number); //First value: just add it
} else {
  if (number < floatArray[0]) {
    //Smaller than the first value: insert at the front
    floatArray = [number];
    arrayCopy.forEach(function(element, index) {
      floatArray.push(element);
    });
  } else if (number > floatArray[len-1]) {
    //Larger than the last value: append at the end
    floatArray.push(number);
  } else {
    //Otherwise, find the two values it falls between and insert it there
    //(assumes distinct values)
    floatArray.forEach(function(element, index) {
      var curIndex = index;
      var n1 = element;
      var n2 = floatArray[index+1];
      if (number > n1 && number < n2) {
        floatArray[curIndex+1] = number;
        arrayCopy.forEach(function(element, index) {
          if (index > curIndex) {
            floatArray[index+1] = element;
          }
        });
      }
    });
  }
}
return floatArray;

 

2. Under CALCULATION, create another function called sortFloatArray(), which sorts a given float array using the rank() function above:


var newArray = [1.0]; newArray.pop(); //Create an empty float array
floatArray.forEach(function(element, index) {
  newArray = CALCULATION.rank(newArray, element);
});
return newArray;

 

3. Here is the script to replace the final script in the previous blog – passing customer values to Query 3’s variables.

var topcustomers = DS_1.getMembers("ZR_CUST", 5); //Getting top 5 members from Query 2
var variables = ["ZKAR_VAR_C1","ZKAR_VAR_C2","ZKAR_VAR_C3","ZKAR_VAR_C4","ZKAR_VAR_C5"]; //All variables for Query 3
var customerKeyRanked = [""]; customerKeyRanked.pop(); //Create a placeholder array
var profitValues = [1.0]; profitValues.pop(); //Create an empty float array

//Collect the profitability value for each of the top 5 customers
topcustomers.forEach(function(element, index) {
  profitValues.push(DS_1.getData("", {"(MEASURES_DIMENSION)": <Technical Key of Profitability>, "ZR_CUST": element.internalKey}).value);
});

var profitValuesRanked = CALCULATION.sortFloatArray(profitValues); //Sort all profitability values in the array

//Find each customer's position in the sorted array
topcustomers.forEach(function(element, index) {
  var key = element.internalKey;
  var profit = DS_1.getData("", {"(MEASURES_DIMENSION)": <Technical Key of Profitability>, "ZR_CUST": key}).value;
  profitValuesRanked.forEach(function(element, index) {
    if (element == profit) {
      customerKeyRanked[index] = key;
    }
  });
});

//Pass sorted customer member values to the variables
variables.forEach(function(element, index) {
  APPLICATION.setVariableValueExt(element, customerKeyRanked[index]);
});

 

Now, you will notice that the top/bottom 5 customers are sorted in an ascending/descending order.


Understanding Level of Detail Expression (LOD) – Include


Level of Detail (LOD) expressions are very versatile and flexible, enabling users to get deeper insights into data. Understanding LOD can be a bit tricky. In this blog, we will focus on understanding the concept of Include LOD and its benefits.

What does Include LOD do?

Include LOD enables calculations to be computed for dimensions present in the view (Rows, Columns and Marks) along with the dimensions not present in the view. Why would we require computations for dimensions that are not present in the view? The following example will help in understanding the necessity.

The dataset used for the following exercises is Sample-Superstore (2018).

Let’s try to find out Average Sales in every State.

Build the following chart showing Average Sales by each State.


Image 1 – Chart showing average sales

The challenge is to find the average sales in each state per customer. If we add the Customer Name dimension to the view to achieve this, the existing view gets distorted, as shown below:


Image 2 – Including customer name distorts view

We should be able to find the average sales in each state per customer without adding Customer Name dimension to the view.

This is where Include LOD helps.


Image 3 – Using LOD – Include
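As a reference for readers who cannot see the screenshot, an Include LOD calculated field for this scenario would typically look like the following (field names follow the Sample-Superstore dataset; this is a sketch, not necessarily the exact field shown):

```
{ INCLUDE [Customer Name] : SUM([Sales]) }
```

Dropping this field on the view and aggregating it with AVG yields the average sales per customer for each State.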

The above expression tells Tableau to take the Customer Name dimension into consideration, along with the State dimension already in the view, when calculating Average Sales – even though Customer Name is not present in the view.

Applying the above expression, we get the following output:


Image 4 – Result after LOD – Include

We can verify the results using another view. Build a view with Dimensions and Measures as follows:


Image 5 – Average sales by State per customer for comparison

Here both State and Customer Name dimensions are used. Therefore, the Average Sales is computed per customer for each State.

Go to Analysis->Totals->Add All Subtotals.

Then set Analysis->Totals->Total All Using->Average.

This gives an output showing Average Sales per Customer for each state.

Comparing the Average Sales value for California in Image 4 and Image 5, we find that the values are the same.

Therefore, using INCLUDE LOD, Average Sales was computed for both State and Customer Name even though Customer Name was not present in the view.

LOD helps in solving many complex questions. We will be looking into other LOD expressions – Fixed and Exclude in subsequent blogs.

* * *

Learn more about Visual BI’s Tableau consulting & end user training programs here.


Fundamental Visual Design Rules to Design a User-Friendly Interface


What is an intuitive interface?

A user interface of a report, dashboard or application starts with the visual presentation; therefore, the first interaction a user has with an interface is visual communication. Furthermore, today’s BI reports are evolving into more dynamic tools, which offer users more flexibility to consume and interact with the data. Unlike static reports, these dashboards and reports can create a flow of interaction that closely aligns with the user’s work process. Both self-service tools (such as Power BI and Tableau) and IT-authored tools (QlikView and Lumira Designer) offer rich capabilities for interactivity (such as filtering, bookmarking or drill-down). With power comes responsibility: these capabilities of BI tools give rise to more exciting challenges when designing an intuitive user interface. In addition to visual design, our toolbox also includes interaction design, information architecture and writing. To design an intuitive interface, let’s first clarify what we want to achieve with our UI:

  • Clearly communicate the content and functionality of the dashboard. The user must understand what the interface does.
  • Make the information users need accessible.
  • Facilitate users’ workflow and help them achieve their goals.

To achieve the above objectives, here are the basic visual design guidelines you can follow:

 

Create a Strong and Clean Layout

A layout is the overall structure and arrangement of elements in an interface. It is also the first design element to make an impression on users’ visual perception. A good UI helps them quickly scan the interface, understand the composition of the dashboard and know where exactly to look for the information they need. A good layout needs to clearly differentiate different functional areas of the dashboard with visual elements such as boundaries or background color.


The common components of a dashboard interface include header and navigation or utility bar, filter panel, utility panel and visualization area. It is important to consistently arrange similar filtering controls and utilities together so that users can quickly find and perform the actions they need.

The example on the bottom is an enhanced design from the one on top. In this example, we create a strong distinction between different functional areas of the dashboard by using contrasting background colors.

 

Chunking Elements

When your dashboard has a lot of information, it can look quite cluttered. Chunking means grouping elements that share the same content or functionality. Because we have a tendency to perceive grouped elements as one, chunking helps declutter the presentation and makes it easier for users to consume information. Furthermore, grouping similar elements lets users easily scan the overall structure of the content at first glance, while also letting them focus on one type of content at a time.


In the example above, elements are grouped by KPIs: Quantity, Revenue and Gross Margin. Instead of seeing 9 different visualizations, users will first see 3 groups of visualizations, and perceive each block as an individual element.

 

An alternative approach is to separate 3 groups by a subtle horizontal line and some generous padding. Here we can even remove the gray background and make the dashboard leaner.


 

Design with Consistency

All elements having the same role need to have consistent formatting or occupy the same space. This will help users quickly get familiar with the functionality of the dashboard. Consistency needs to be maintained even for the most granular design elements, including those that are not easily spotted, since the slightest difference in formatting can affect the readability of the report. Pay attention to the following:

  • Alignment:
    • Similar elements need to have the same alignment
    • Left alignment is recommended unless any other alignment is used to create a contrast
  • Font size and colors of the text category (such as chart title, subtitle etc.)
  • Icons: Similar icons need to offer the same functionality throughout the dashboard
  • Labels: Similar looking labels need to show and mean the same thing throughout the dashboard
  • Padding and Margin: The distance between two elements, as well as the distance between an element and its container, needs to stay consistent.


 

Use Negative Space with Intention

Negative space in design is defined as space not used to display any visual elements. However, it is not to be mistaken for the empty area that remains after all elements have been laid out. Negative space is an active design element that can serve the following purposes:

  • Increase the readability of content by giving users’ eyes a break
  • Separate different chunks of information and reduce clutter
  • Create contrast to help users focus on the content

Therefore, space is something that needs to be added more often, rather than be filled, to create a more readable report.


In the example above, we decreased the font size of the dashboard title, yet the text becomes easier to read because there is more space around the letters. At the same time, we added space between the donut chart and the column chart/area chart, which makes it easier to focus on each visualization.

 

Simplify, Simplify and Simplify

To make important information more prominent and readable, remove all the noise. All visual elements that do not serve a unique purpose should be removed. Keep in mind that every pixel of ink or additional word makes users’ brains work a bit more than necessary. Try removing extra visual elements or words and check whether it is still easy enough to navigate the dashboard/report. The general rule is: if you don’t miss an element or get lost in the information without it, it does not need to be there in the first place.


In this example, we have simplified the chart title to show the dimension only. Since each KPI has been given a separate section, the data point being represented can still be easily understood.

 

By following these fundamental Visual Design Rules, you will be able to create an effective user interface for a report, dashboard or application without much effort. Now, Design Away!


Tips & Best Practices when using SAP BW and SAP HANA Connectors in Power BI


This blog covers six key areas to focus on when using the SAP BW and SAP HANA Connectors in Power BI. These tips and best practices can greatly improve troubleshooting and performance when building Power BI dashboards on SAP environments.

 

SAP BW Connector Overview

The SAP BW Connector for Power BI supports BW version 7.x and above and allows Power BI to access SAP BW data sources. We have options to connect to both the Application Server and the Message Server, and the Connector works seamlessly in both Import and DirectQuery modes. We recommend Import mode when dashboard performance is critical.

Version 2.0 and above of the BW Connector supports retrieval of millions of rows of data along with much-improved exception handling.

 

SAP HANA Connector Overview

The SAP HANA Connector helps us access SAP HANA data sources for Power BI report generation. The recommended approach is to use the Multidimensional connector when working with SAP HANA data sources. The SAP HANA Connector supports both Import and DirectQuery modes. Other available features include SSL, multiple HANA input variables and SAML-based Single Sign-On (SSO).

 

Tips and Best Practices

 

1. Query Folding

  • The idea behind this best practice is to push data processing to the data source server while working with a Power BI report. The Query Folding option facilitates data processing on the SAP HANA server.
  • However, adding a custom column moves the data processing from the server side to Power BI. This can be seen in the screenshots below:

The Query folding option is present when fetching the data as-is from the source:


Adding a custom column switches from query folding to processing the query within Power BI.

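As a hypothetical illustration in Power Query (M), a computed column like the one below is evaluated by Power BI’s mashup engine rather than folded back to SAP HANA (the step and column names are invented):

```
// Hypothetical M step; "Source" stands for the folded HANA query.
// The arithmetic below cannot be folded, so processing moves into Power BI.
= Table.AddColumn(Source, "Margin", each [Revenue] - [Cost])
```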

 

2. Column Selection

  • Select only the columns required for reporting.
  • Always avoid pulling in the complete column list, which will hamper performance.
  • Create a star schema using Import mode to facilitate ease of reporting.
  • Create a flat view or table with only the columns that are required for reporting.
  • Reduce the number of Joins in the underlying data source.

Star schema example:


 

3. Variables in Queries

  • Restricting the volume of data coming into the report will enable good performance. We can do this by using variables in the backend source queries used by the report.
  • Variables defined in queries are available to edit only in Power BI desktop for now and not in the Power BI service.


 

4. Handling Memory Issues

  • To handle out-of-memory issues, we can fine-tune the Batch size setting in version 2.0 of the BW connector.
  • In version 1.0 of the BW connector, we might encounter RFC errors while handling large datasets. The solution here is to partition datasets into smaller chunks, such as weekly, monthly or yearly slices.
  • It is important to ensure we reduce the amount of stress on the SAP server.


 

5. Advanced Tracing Features

  • We can trace the MDX statements generated for BW by enabling environment variables. This enhances logging by including the MDX statements, which is useful for debugging, and can be enabled or disabled as needed.
  • The generated MDX statements can be analyzed in the SAP GUI using the MDXTEST transaction code.
  • The environment variable PBI_EnableSapBwTracing should be set to True.
  • The CPIC_TRACE environment variable should be set to 3 for CPIC tracing (to debug authentication issues).
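Assuming a Windows machine running Power BI Desktop, the two variables above would be set roughly as follows (a sketch; set them before launching Power BI Desktop):

```
PBI_EnableSapBwTracing=true
CPIC_TRACE=3
```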


 

Trace Files:


 

6. SQL Profiler for Tracing

  • When using the HANA Connector, the SQL Profiler can help trace query statements.
  • We could make use of Events like Query Begin, Query End, Direct Query Begin and Direct Query End, to write statements into trace files.
  • Connect SQL Profiler to Power BI Desktop to get required query traces for analysis.
  • Deep-dive analysis can help understand the query path flow and performance of the query statements.


 

If you would like our team to walk you through the detailed findings, you can reach out to us here.


Persisting Calculation Model Output using Flow Graphs in HANA


In any HANA system, as requirements grow more complex, the HANA views start getting complex as well. This leads us down a slippery slope: to drive adoption we need more rules, and coding in more rules makes the views even more complex, affecting performance and, ultimately, adoption. In this blog, let us see how using Flow Graphs in HANA can help in such situations.

Performance issues are common when complicated HANA calculation models are consumed through live data connections in BOBJ tools or SAP Analytics Cloud. For instance, in BOBJ, when a filter is applied to the story or data needs to be refreshed, the live connection executes all the joins and calculations in the calculation view to load the data.

One easy way to solve this problem is to persist the data from the calculation model in an object and consume this persisted data in BOBJ or the analytics tool of choice. This can improve performance tremendously. In a landscape where we have HANA as the database with Smart Data Integration (SDI), tables are the only objects that can store data. To persist a calculation view’s output in a table in HANA, we can either write a procedure or use a Flow Graph.
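As a rough sketch of the procedure alternative mentioned above (all schema, table and view names here are hypothetical), persisting a calculation view’s output boils down to a delete-and-insert over a target table:

```sql
-- Hypothetical sketch only; object names are illustrative, not from a real system.
CREATE PROCEDURE "MYSCHEMA"."PERSIST_CV_OUTPUT" LANGUAGE SQLSCRIPT AS
BEGIN
  -- Clear the previously persisted snapshot
  DELETE FROM "MYSCHEMA"."CV_OUTPUT_PERSISTED";
  -- Materialize the current output of the calculation view
  INSERT INTO "MYSCHEMA"."CV_OUTPUT_PERSISTED"
    SELECT * FROM "_SYS_BIC"."mypackage/CV_SALES";
END;
```

The Flow Graph approach described below achieves the same result without writing any code.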

 

Flow Graphs

A flow graph is usually considered an operator that helps transform data from a remote source into SAP HANA, in either batch or real-time mode.

But a flow graph can also be used to feed data from a calculation model into a template table, and it can be scheduled easily. This way, the calculation model output can be persisted without writing any code, and the in-built HANA functions take care of the rest.

 

Steps to Persist Calculation Model Output using Flow Graphs in HANA

  1. In Web IDE, right click on a package
  2. Click on New -> Flow Graph. Give a name for the Flow Graph


3. Select Data Source from the left and drop it in the Content area of the Flow Graph


4. Select a Calculation View


5. Now select Template Table from the left and drop it in the Content of the Flow Graph


6. Connect the Data Source and the Template Table


7. Double click on the Template Table to change the Output Table name and Schema, as required


8. Save the Flow Graph and click on the Execute button to load data

9. The Flow Graph can also be scheduled to perform batch load

 

Learn more about Visual BI’s SAP HANA Offerings here.


Implementing a KPI Tile Grid in SAP Lumira using a Single Component


When we mention dashboards, we visualize charts, tables & numbers. Different types of dashboards appeal to different types of users. While a Financial Analyst may want to consume data in numeric & tabular formats, an Operations Manager may prefer insights that are aggregated using a combination of charts & tables. Senior Executives generally find KPI Tiles & Bullet Charts more appealing than charts & tables, since these visualizations deliver instantaneous performance insights. In this blog, let’s look at how I delivered an mxn KPI Tile Grid in SAP Lumira for a customer, using ONE component.


 

Typically, such a requirement would have been a complex endeavor needing multiple elements such as container components, header texts, data values, comparison values and footer texts for each metric displayed. Achieving a responsive design with so many elements would have been challenging as well.

However, I implemented a simple, clean & responsive design using Visual BI’s ‘Advanced KPI Tile’ component. This component catered to all my needs with its multiple in-built features, including value-added features such as conditional formatting, icons and embedded charts. Here’s how I did it.

  1. Load a single Advanced KPI Tile


 

2. Set up the mxn grid

Click on the component and navigate to the Additional Properties sheet. The default layout that comes with the component is completely customizable. In the image below, I have customized the Tile layout design to get 21 different tiles (3 rows x 7 columns). Each cell that you see here is a ‘container’ with its unique ID – which helps determine which cell has been clicked.


 

3. Map the data source

Connect the component to a data source that would provide the data required for all the containers in the 21 KPI tiles.

 

4. Map the data elements and format them

Map the available data and set up conditional formatting and sparklines. The output is as seen in the first image.

In short, using a single component to deliver such a complex arrangement helped make the dashboard a whole lot simpler, leaner and faster. It was exciting to leverage this one component to implement the 21 distinct and responsive tiles for the end users.

 

Learn more about Visual BI’s SAP Lumira 2.2 Offerings here.


Understanding Fixed Level of Detail (LOD) in Tableau – Part 1


In our previous blog on LOD in Tableau, we covered the concept and applications of LOD. In this blog, we will focus on Fixed LOD.

What does Fixed LOD do?

The measure values are computed only for the dimensions specified in the expression.

Syntax of Fixed LOD Expression

{Fixed [Dimension 1], [Dimension 2], …: Aggregation}

Let’s look at an example to understand better.

Exercise 1:

Build the following view


Image 1

 

Adding the Sub-Category dimension to the view, we notice that the Sales value changes.


Image 2

The Sales value is now computed at the Segment, Category and Sub-Category level, whereas previously it was calculated at the Segment and Category level only.

Exercise 2:

Repeat the steps mentioned in Exercise 1, replacing Sum (Sales) with the following expression:

{Fixed Segment, Category: Sum (Sales)}


Image 3

Comparing Image 2 and Image 3, we notice that the Sales values are not the same even though the same set of dimensions has been used.

Also, we notice that the Sales values shown in Image 3 are the same as the ones in Image 1.

The expression we used in Exercise 2 is as follows:

{Fixed Segment, Category: Sum (Sales)}

Even though Sub-Category is added to the view, the computation is done only for the Segment and Category dimensions, as the expression fixes Sum (Sales) to only the Category and Segment dimensions.

In the subsequent blogs, we will look at how dimension filters and context filters affect Fixed LOD expressions.


The post Understanding Fixed Level of Detail (LOD) in Tableau – Part 1 appeared first on Visual BI Solutions.


Visual BI makes its Debut at the Gartner 2019 Data & Analytics Summit



Visual BI Participates in Gartner Data & Analytics Summit. Plano, Texas, March 14th, 2019 – Visual BI Solutions, a niche Business Intelligence (BI) & Analytics firm and a Microsoft Gold & SAP Silver Partner, today announced its participation in the upcoming Gartner Data & Analytics Summit on March 18-21 in Orlando at Booth #913.

 

In this event, Visual BI will be providing an exclusive demo of the following BI products that help enterprises deliver actionable insights:

  1. Value Driver Tree – available for Microsoft Power BI and SAP Lumira. This product enables enterprise users to visualize business models and conduct their planning, forecasting & simulations in an agile manner.
  2. VBI View – The One Enterprise BI Portal to Manage Multiple BI Platforms. This product enables enterprise users to consume dashboards & reports from multiple platforms and comes with value-added features such as auto-sync, metadata management, usage reporting & more.

 

In addition, Visual BI will be exhibiting a variety of other offerings including but not limited to BI executive workshops, quick-start engagements, strategy/ roadmap sessions, BI cloud migration & corporate BI training.

“Gartner Data & Analytics Summit will be a platform that offers tremendous opportunity for us to showcase our unique value driven products offering agile analytics techniques and customer success stories covering Microsoft & SAP Analytics, with both business users and technical support teams alike,” said Gopal Krishnamurthy, Founder/CEO, Visual BI Solutions.

 

Gartner Data & Analytics Summit 2019 – Click here to view the agenda and register for the event.

About the Gartner Data & Analytics Summit 2019

Data and analytics leaders are fueling digital transformation, creating monetization opportunities, improving the customer experience and reshaping industries. Gartner Data & Analytics Summit 2019 provides the tools to build on the fundamentals of data management, business intelligence (BI), and analytics; harness innovative technologies such as artificial intelligence (AI), blockchain and the Internet of Things (IoT); and accelerate the shift toward a data-driven culture to lead the way to better business outcomes.

 

About VISUAL BI

Visual BI is a niche provider of BI & Analytics products, services & solutions. Headquartered in Plano, Texas, Visual BI has won recognition from customers for driving BI excellence by leveraging a team of platinum-level experts. Some of the world’s trusted brands rely on Visual BI’s expertise to drive BI adoption and to deliver actionable insights to their decision-makers and executives.

Highlights:

  • Best Companies to Work for in Texas, 2018
  • Ranked in the Top 50 in Deloitte Technology Fast 500, 2015
  • Ranked by CIOReview as one of the Top 100 Big Data Companies in the US
  • Microsoft Gold Partner for Data Analytics & SAP Silver Partner
  • Dedicated Visual BI Labs facility in Carrollton, TX, driving R&D and BI innovations

Visual BI’s end-to-end BI expertise covers platforms such as SAP Business Warehouse, SAP BusinessObjects BI solutions, SAP HANA®, Cloud Enablement & Integration (Azure, AWS, SCP), Big Data, advanced analytics and visualization tools such as SAP Lumira, Microsoft Power BI, Tableau, TIBCO Spotfire and more.

For more information, please visit http://visualbi.com/


The post Visual BI makes its Debut at the Gartner 2019 Data & Analytics Summit appeared first on Visual BI Solutions.

Understanding Fixed Level of Detail (LOD) in Tableau – Part 2


In the previous blog, we focused on understanding the concept of Fixed LOD in Tableau. In this blog, let's try to understand it through an example.

Scenario:

We would like to understand the purchasing pattern of customers. As the first step in this analysis, we will try to find out how many customers placed how many orders/purchases.

E.g. how many customers placed 5 orders?

Let’s try to understand the question better with an example.

Build the following view:

Image 1

We can see that customer Cynthia Arntzen has placed seven orders, Cynthia Voltz nine orders and Cynthia Delaney five orders.

Therefore, we know that one customer has placed seven orders, one customer has placed nine orders and one customer has placed five orders.

But the business scenario is to find out how many customers placed seven/nine or five orders.

Arriving at the answer by scrolling down the above view and noting how many customers have placed a certain number of orders is very cumbersome.

We can try using Countd(Order ID).

 

Image 2

This view shows us how many orders each customer has placed, and not how many customers placed how many orders.

Solution:

Using Fixed LOD: {FIXED [Customer Name]: COUNTD([Order ID])}

The expression can be translated as follows: for each customer, count how many unique orders were placed.

This expression calculates how many Orders were placed by customers, but what is the significance?

Image 3

The total number of Orders made by each Customer will be represented as a separate dimension.

We can see from the above image that a certain customer has made THREE orders in total, while another customer has made SEVEN orders in total. The total number of orders will be treated as values of a dimension. Let's apply the Fixed LOD expression to understand better.

{FIXED [Customer Name]: COUNTD([Order ID])}

After creating a calculation with the above-mentioned Fixed syntax, drag and drop the calculation into the Dimensions pane. We do this to treat the output of the calculation as a dimension and not an aggregate. Place this dimension on the Columns shelf and CountD(Customer Name) on the Rows shelf.

Image 4

We can now find how many customers placed how many orders.

From the analysis, we have found that 134 customers have placed 5 Orders.
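To make the mechanics concrete, here is a small Python sketch of the same two-step computation (the customer and order data are made up for illustration): first a COUNTD of orders per customer, then a count of customers per order count, treating the first result as a dimension.

```python
from collections import Counter

# Hypothetical (customer, order_id) pairs for illustration
orders = [
    ("Cynthia Arntzen", "O-1"), ("Cynthia Arntzen", "O-2"),
    ("Cynthia Voltz",   "O-3"),
    ("Cynthia Delaney", "O-4"), ("Cynthia Delaney", "O-5"), ("Cynthia Delaney", "O-4"),
]

# Step 1 -- {FIXED [Customer Name]: COUNTD([Order ID])}: distinct orders per customer
orders_per_customer = {
    customer: len({oid for c, oid in orders if c == customer})
    for customer in {c for c, _ in orders}
}

# Step 2 -- treat the result as a dimension: how many customers placed N orders?
customers_per_order_count = Counter(orders_per_customer.values())
print(customers_per_order_count)
```

In this made-up sample, two customers placed two distinct orders each and one customer placed a single order, which is exactly the "how many customers placed how many orders" shape from the analysis above.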

In the subsequent blogs, we will look at understanding other LOD concepts.


The post Understanding Fixed Level of Detail (LOD) in Tableau – Part 2 appeared first on Visual BI Solutions.

Intuitive Visualization for BW Hierarchical Data


This blog takes a quick look at how the Sunburst Chart is a very intuitive visualization for BW hierarchical data. A Sunburst Chart is a multi-level pie chart used to represent the proportion of the different values at each level of a hierarchy. This chart is often compared to the Treemap Chart, which is also used to show hierarchical data. A Sunburst Chart does a better job of showing how the data at a particular level of the hierarchy relates to the data at each of the higher and lower hierarchy levels. It also clearly displays the full depth of the hierarchy. This chart is part of Visual BI Extensions (VBX) for SAP Lumira Designer.

 

Clear view of hierarchical data

The concentric rings represent the different levels of the hierarchy, the innermost being the top of the hierarchy. The slices in the rings clearly indicate the categorical grouping at each level of the hierarchy. The width of the slices indicates their proportionate values within the parent group. The variation in the width of the slices helps in intuitive visualization of how they compare with each other. As you can see in the chart below, Illinois and Massachusetts had maximum sales.


 

Ideal for viewing proportions

The following illustration shows the tiered structure of the chart and also highlights how easy it is to expand or collapse the hierarchy to focus on a particular hierarchy level and get instant insights into how the categories at that level stack up against each other.


 

Drill down to deeper categorical levels

The following illustrates how the chart changes dynamically in response to drilling up or down through the BW hierarchy in the table.


 

When space is not a constraint, this chart can be used to paint a complete picture of the BW hierarchy and to draw attention to the details with its stunning visualization.

 

Learn more about Visual BI Extensions (VBX) for SAP Lumira Designer here.


The post Intuitive Visualization for BW Hierarchical Data appeared first on Visual BI Solutions.

Top N Ranking on Dynamically Selected KPI in HANA


One of the most common dashboard feature requests from clients is for the Top N/Bottom N functionality. As we know, this is pretty straightforward to implement. But what if the client wants the Top N or Bottom N ranking based on a KPI that is selected dynamically at runtime? In this blog, we list down the steps for Top N Ranking on dynamically selected KPI in HANA, using a single Measure. The KPI on which the ranking is to be based is passed as Input Parameter.

1. Create a HANA View, drag and drop a Projection, map it to a source with the required KPIs and Dimensions.

Note: There needs to be at least one Dimension with a single member.

2. Create an Input Parameter to receive a numeric value (here 1,2,3 etc.) corresponding to the Measure (KPI), based on which Top N Ranking needs to be done. Setting a default value is optional. Here, we set it to 1.


 

3. Create a Calculated Column in Projection (here, ‘KPI_VALUE’) and write a Case Statement to check the Input Parameter value and return the corresponding Measure for Top N Ranking.


Parameter Value     Measure returned
1                   State Bottle Retail
2                   Volume_Sold_(Litres)
Any other value     Bottles Sold

 

4. Create an Aggregation on top of the Projection, so that data gets aggregated and has unique values for Top N Ranking. Also, ensure you have converted all the KPIs/Measures to Aggregated Columns.


 

5. Create a Rank Node on top of the Aggregation Node with the settings below:

  • Choose Top N or Bottom N for the Sort Direction.
  • Set a fixed Threshold value (here, 5) or use an Input parameter for N.
  • In the Order By field, choose the Calculated Column that was created (‘KPI_VALUE’).
  • For Partition By, choose the Dimension with the single member. In this example, we restrict Year to static value ‘2018’.
  • Enable ‘Dynamic Partition Elements’ and ‘Generate Rank Column’.


 

6. Map this Rank Node to the available Aggregation Node as Input and ensure all the columns, including 'KPI_VALUE' & 'Rank_Column', are propagated to the semantics.

7. Change 'Rank_Column' to type Dimension and 'KPI_VALUE' to Measure, then activate the HANA View. Execute the View and you will be prompted for the Input Parameter.

8. After entering a value for the Input Parameter, drag and drop the Dimension with the single member into Rows (since it was used for partitioning), as well as any other required Dimensions (here, 'Store_Location'). Place 'KPI_VALUE' in Columns. The output, as seen below, displays the Top 5 Store Locations by 'State_Bottle_Retail'. Note that the 'KPI_VALUE' column displays the same values as 'State_Bottle_Retail'.


 

9. Now if you change the Input Parameter value to '2', it displays the Top 5 Store Locations by 'Volume_Sold_(Litres)'. Note that 'KPI_VALUE' displays the same values as 'Volume_Sold_(Litres)'.


 

The steps explained above clearly show how we only have to use a single Measure in the Reporting layer (here, 'KPI_VALUE') to display Top N data based on the selected KPI.
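The CASE-plus-Rank logic can be sketched in plain Python as follows. The store names, KPI values and the kpi_map mapping below are made up for illustration; in HANA this work is done by the Calculated Column and the Rank Node rather than application code.

```python
# Hypothetical store rows and KPI mapping, purely for illustration
data = [
    {"location": "Store A", "state_bottle_retail": 900, "volume_sold": 120},
    {"location": "Store B", "state_bottle_retail": 750, "volume_sold": 300},
    {"location": "Store C", "state_bottle_retail": 400, "volume_sold": 210},
]

def top_n(rows, parameter, n=2):
    """Mimic the Calculated Column (CASE on the Input Parameter) plus the
    Rank Node: pick the measure, rank descending, keep the Top N."""
    kpi_map = {1: "state_bottle_retail", 2: "volume_sold"}
    kpi = kpi_map.get(parameter, "state_bottle_retail")  # 'any other value' branch
    ranked = sorted(rows, key=lambda r: r[kpi], reverse=True)
    return [(r["location"], r[kpi]) for r in ranked[:n]]

print(top_n(data, 1))  # Top 2 by state_bottle_retail
print(top_n(data, 2))  # Top 2 by volume_sold
```

Changing the parameter swaps the measure that drives the ranking while the consuming layer keeps reading one and the same "KPI value" column, which is the whole point of the single-Measure design.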

 

Learn more about Visual BI’s SAP HANA Offerings here.


The post Top N Ranking on Dynamically Selected KPI in HANA appeared first on Visual BI Solutions.

Drag and Drop Components at Runtime in SAP Lumira Designer


With the emergence of self-service BI tools that allow users to easily modify the layout of visualizations, a feature such as drag and drop of components into the canvas at runtime is very helpful. Visual BI Extensions (VBX) has a suite of components that offer a wide variety of features that greatly enhance the visualization capabilities of SAP Lumira Designer. The VBX Script Box is one such powerful component that helps leverage the extensive capabilities of JavaScript/jQuery inside Lumira Designer. This blog walks us through the steps to drag and drop components at runtime in SAP Lumira Designer.

This solution supports both native and VBX components, since the VBX Script Box considers all components as HTML elements.


 

Setting up the Dashboard

The example dashboard below contains one VBX Column-Bar Chart, a native Lumira Designer Pie Chart and a Crosstab. Create a placeholder area with the required number of panels (or any other container component), which will be the target for the component drop.


 

Adding the VBX Script Box

Add a VBX Script Box to the application with the scripts given below, to enable drag and drop. You would need the IDs of the components to be made draggable and the target containers for the drop.

 

Setting the ‘Draggable’ attribute

Enabling the 'draggable' attribute of a component allows the user to move the component at runtime (drag operation):

$("#__component1").attr("draggable", "true");

 

Functions for Drag and Drop

JavaScript/jQuery functions can be written to handle the three different events – drag of component, drop of component in the target and drag of component over the target.

On Drag of component

The drag(ev) function gets called when the component that is set as draggable is dragged. This sets the ID of the dragged component in the dataTransfer object using the setData method. Here, target refers to the component being dragged.

function drag(ev) {
  ev.dataTransfer.setData("text", ev.target.id);
}

 

On Drop of component

This function gets called when the component is dropped in a target panel. The default behaviour for the drop event is prevented, the ID of the dragged component is obtained using the getData method, and the corresponding element is then appended to the target container.

function drop(ev) {
  ev.preventDefault();
  var data = ev.dataTransfer.getData("text");
  ev.target.appendChild(document.getElementById(data));
}

 

On Drag of the component over the target

The allowDrop function gets called when the component is dragged over the target. This just prevents the default behavior for the event.

function allowDrop(ev) {
  ev.preventDefault();
}

 

Binding the functions

Now, bind the event handler functions defined above to the corresponding events of the components.

$("#__component1").attr("ondragstart", "drag(event)"); // drag function called when the component is dragged

$("#PANEL_1_panel1").attr("ondrop", "drop(event)"); // drop function called when the dragged component is dropped in the container panel

$("#PANEL_1_panel1").attr("ondragover", "allowDrop(event)"); // allowDrop function called when the component is dragged over the container panel

Note that inline event handler attributes expose the event object under the name 'event', which is then passed into the functions' 'ev' parameter.

 

CSS can be used for any formatting required if, for instance, the dropped component has to fill up the target container area. The setInterval() method can be used if you would like to proactively track drag-and-drop events at regular intervals.

In addition to rendering an enhanced, near-self-service drag & drop experience, the changes made using JavaScript persist in the exported PDF files as well, so the user can download customized report extracts.

Note: The ‘draggable’ attribute is a feature of HTML5 supported in most browsers and in IE 9.0 and above.

 

Stay tuned to this space for more customizability with runtime addition of containers to the placeholder area. Learn more about Visual BI’s SAP Lumira Offerings here.


The post Drag and Drop Components at Runtime in SAP Lumira Designer appeared first on Visual BI Solutions.

Understanding Exclude Level of Detail (LOD) in Tableau


In our previous blogs on LOD in Tableau, we covered the concepts of Include and Fixed LOD expressions and their use cases. In this blog, we will focus on Exclude LOD in Tableau.

What is Exclude LOD?

Computations are performed for all dimensions present in the view except for the dimension(s) mentioned in the expression.

{Exclude [Dimension1], [Dimension2] … [Dimension n]: Aggregation}

Let’s understand Exclude LOD through an example.

Build the following view:

Image 1

Currently, sales are computed at the Region and State level.

Now add City to the view:

Image 2

In the above image the Sales are being computed at Region, State and City level.

Create the Exclude LOD calculation and add it to the view.

{EXCLUDE [City] : sum([Sales]) }

Image 3

The column with Exclude expression is different from the Sales column.

Since the LOD calculation excludes the City dimension when computing sales, sales is computed only at the Region and State level. This can be verified by comparing the sales value of Illinois in Image 3 and Image 2.
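A minimal Python sketch of the same behaviour (the row data is made up for illustration): the view's detail is Region/State/City, but the Exclude computation aggregates at Region/State only, so every city row within a state shows the same value.

```python
from collections import defaultdict

# Hypothetical rows for illustration: (region, state, city, sales)
rows = [
    ("Central", "Illinois", "Chicago",     100),
    ("Central", "Illinois", "Springfield",  40),
    ("Central", "Texas",    "Dallas",       70),
]

# {EXCLUDE [City]: sum([Sales])} -- City is dropped from the computation,
# so the sum is taken at the Region/State level instead
exclude_city = defaultdict(int)
for region, state, city, sales in rows:
    exclude_city[(region, state)] += sales

# Both Illinois cities report the same state-level total
for region, state, city, sales in rows:
    print(region, state, city, sales, exclude_city[(region, state)])
```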

Let’s look at applying Exclude LOD expression in a real-time scenario.

Scenario:

We need to find the difference between the Sales values of two Sub-Categories, with the flexibility of choosing the Sub-Category of interest as the reference.

Solution:

Build a Sales by Sub-Category bar chart.

Create a Parameter with field values from Sub-Category and create a calculation as shown below:

Image 4

Adding the above expression to the view, we get the following output:

Image 5

As specified in the calculation, only the sales of the selected parameter value (in this case, Accessories) are shown.

Create the following expression:

{EXCLUDE [Sub-Category] : sum([Selected Sub Category Sales])}

This expression computes the total without breaking sales down by Sub-Category.

Since Sub-Category is the only dimension present in the view, adding the Exclude expression gives the same output as using the expression in a sheet with no dimensions for the selected parameter. The output is shown below:

Image 6

Adding the expression to the previous view, we get the following output:

Image 7

We can see the values are the same as in Image 6.

We can subtract the Exclude expression's output from the actual sales of each Sub-Category. This will show the difference between the sales of the Accessories Sub-Category and the other Sub-Categories.

Sum([Sales]) - Sum([Exclude Sub Category])

Adding this to the view, we can easily compare the difference in sales between the Sub-Category selected in the parameter and the other Sub-Categories.

Image 8

In subsequent blogs, we will cover other functionalities of Tableau.

Contact us today to learn more about Visual BI’s Tableau consulting & end user training programs here.


The post Understanding Exclude Level of Detail (LOD) in Tableau appeared first on Visual BI Solutions.

Persisting Calculation Model Output using Flow Graphs in HANA


In any HANA system, as requirements grow more complex, the HANA views start getting complex as well. This leads us down a slippery slope: to drive adoption we need more rules, and coding in more rules makes the views even more complex, affecting performance and, ultimately, adoption. In this blog, let us see how using Flow Graphs in HANA can help in such situations.

Performance issues are common when complicated HANA calculation models are consumed through live data connections in BOBJ tools or SAP Analytics Cloud. For instance in BOBJ, when a filter is applied to the story or if data needs to be refreshed, the live connection executes all the joins and calculations in the calculation view, to load the data.

One easy way to solve this problem is to persist data from the calculation model in an object and consume this persisted data in BOBJ or the analytics tool of choice. This can improve performance tremendously. In a landscape where we have HANA as the database with Smart Data Integration (SDI), tables are the only objects that can store data. To persist a calculation view's output in a table in HANA, we can either write a procedure or use a Flow Graph.

 

Flow Graphs

A flow graph is usually considered an operator which helps transform data from a remote source into SAP HANA either in batch or real-time mode.

But a flow graph can also be used to load data from a calculation model into a template table, and it can be scheduled easily. By doing this, the calculation model output can be persisted without writing any code, and the inbuilt HANA functions take care of the rest.
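The persistence pattern itself is simple and independent of the tooling. As a rough illustration only (using Python's sqlite3 module rather than HANA, so the syntax differs from what a Flow Graph generates), the Flow Graph effectively performs a "create table from view" load:

```python
import sqlite3

# Illustration only: SQLite standing in for HANA, with a view as the
# stand-in for a complex calculation model
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('East', 100), ('East', 50), ('West', 70);

    CREATE VIEW sales_calc AS
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region;

    -- Persist the view's output so consumers read a plain table
    CREATE TABLE sales_persisted AS SELECT * FROM sales_calc;
""")
print(conn.execute("SELECT * FROM sales_persisted ORDER BY region").fetchall())
```

Consumers then query sales_persisted directly, paying the join/aggregation cost once at load time instead of on every refresh; scheduling the reload is what the Flow Graph's batch mode adds on top.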

 

Steps to Persist Calculation Model Output using Flow Graphs in HANA

  1. In Web IDE, right click on a package
  2. Click on New -> Flow Graph. Give a name for the Flow Graph


 

3. Select Data Source from the left and drop it in the Content area of the Flow Graph


 

4. Select a Calculation View


 

5. Now select Template Table from the left and drop it in the Content of the Flow Graph


 

6. Connect the Data Source and the Template Table


 

7. Double click on the Template Table to change the Output Table name and Schema, as required


 

8. Save the Flow Graph and click on the Execute button to load data

9. The Flow Graph can also be scheduled to perform batch load

 

Learn more about Visual BI’s SAP HANA Offerings here.


The post Persisting Calculation Model Output using Flow Graphs in HANA appeared first on Visual BI Solutions.


Intuitive Visualization for BW Hierarchical Data

$
0
0

This blog takes a quick look at how the Sunburst Chart is a very intuitive visualization for BW hierarchical data.  A Sunburst Chart is a multi-level pie chart used to represent the proportion of the different values at each level of a hierarchy. This chart is often compared to the Treemap Chart, which is also used to show hierarchical data. A Sunburst Chart does a better job of showing how the data at a particular level of the hierarchy relates to the data at each of the higher and lower hierarchy levels. It also clearly displays the full depth of the hierarchy. This chart is part of Visual BI Extensions (VBX) for SAP Lumira Designer.

 

Clear view of hierarchical data

The concentric rings represent the different levels of the hierarchy, the innermost being the top of the hierarchy. The slices in the rings clearly indicate the categorical grouping at each level of the hierarchy. The width of the slices indicates their proportionate values within the parent group. The variation in the width of the slices helps in intuitive visualization of how they compare with each other. As you can see in the chart below, Illinois and Massachusetts had maximum sales.

intuitive-visualization-bw-hierarchical-dataintuitive-visualization-bw-hierarchical-data

 

Ideal for viewing proportions

The following illustration shows the tiered structure of the chart and also highlights how easy it is to expand or collapse the hierarchy to focus on a particular hierarchy level and get instant insights into how the categories at that level stack up against each other.

Intuitive Visualization for BW Hierarchical Data

 

Drill down to deeper categorical levels

This illustrates how the chart changes dynamically in response to the drill up or drill down through the BW hierarchy in the table.

Intuitive Visualization for BW Hierarchical Data

When space is not a constraint, this chart can be used to paint a complete picture of the BW hierarchy and to draw attention to the details with its stunning visualization.

 

Learn more about Visual BI Extensions (VBX) for SAP Lumira Designer here.

Subscribe to our Newsletter

The post Intuitive Visualization for BW Hierarchical Data appeared first on Visual BI Solutions.

Understanding Fixed Level of Detail (LOD) in Tableau – Part 1

$
0
0

In our previous blog on LOD in Tableau, we understood the concept and applications of LOD. In this blog we will focus on Fixed LOD.

 

What does Fixed LOD do?

The Measure values are computed only for the dimension specified in the expression.

Syntax of Fixed LOD Expression:

{Fixed [Dimension 1], [Dimension 2], …: Aggregation}

Let’s look at an example to understand better.

 

Exercise 1:

Build the following view:

Understanding Fixed Level of Detail (LOD) in Tableau, Part 1

Adding a Sub-Category dimension to the view, we can notice that Sale value changes.

Understanding Fixed Level of Detail (LOD) in Tableau, Part 1

The Sale value is computed now at Segment, Category and Sub-Category level whereas previously it was calculated at Segment and Category level only.

 

Exercise 2:

Repeat the steps mentioned in Exercise 1 and replace the Sum (Sales) with the following expression

{Fixed Segment, Category: Sum (Sales)}

Understanding Fixed Level of Detail (LOD) in Tableau, Part 1

Comparing Image 2 and Image 3 we can notice Sale values are not the same even though the same set of dimensions have been used. Also, we can notice that the Sale values shown in Image 3 are as same as the ones in Image 1.

The expression that we had used in Exercise 2 is as follows,

{Fixed Segment, Category: Sum (Sales)}

Even though Sub-Category is added to the view the computation is done only for Segment and Category dimensions as we have mentioned in the expression to fix the sum (Sales) to only Category and Segment dimensions

 

In the subsequent blogs, we will look at how dimension filters and context filters affect the Fixed LOD expressions

Subscribe to our Newsletter

The post Understanding Fixed Level of Detail (LOD) in Tableau – Part 1 appeared first on Visual BI Solutions.

Visual BI makes its Debut at the Gartner 2019 Data & Analytics Summit

$
0
0

gartner-data-analytics-summit-2019-visualbi-booth-num-913-event

Visual BI Participates in Gartner Data & Analytics Summit, Plano, Texas. March 14th, 2019 –- Visual BI Solutions, a niche Business Intelligence (BI) & Analytics firm and a Microsoft Gold & SAP Silver Partner, today announced its participation in the upcoming Gartner Data & Analytics Summit on March 18-21 in Orlando at Booth #913.

 

In this event, Visual BI will be providing an exclusive demo of their following BI products that help enterprises deliverable actionable insights:

  1. Value Driver Tree- available for Microsoft Power BI and SAP Lumira. This product enables enterprise users to visualize business models and conduct their planning, forecasting & simulations in an agile manner.
  2. VBI View: The One Enterprise BI Portal to Manage Multiple BI Platforms. This product enables enterprise users to consume dashboards & reports from multiple platforms and comes with value-added features such as auto-sync, metadata management, usage reporting & more

 

In addition, Visual BI will be exhibiting a variety of other offerings including but not limited to BI executive workshops, quick-start engagements, strategy/ roadmap sessions, BI cloud migration & corporate BI training.

“Gartner Data & Analytics Summit will be a platform that offers tremendous opportunity for us to showcase our unique value driven products offering agile analytics techniques and customer success stories covering Microsoft & SAP Analytics, with both business users and technical support teams alike,” said Gopal Krishnamurthy, Founder/CEO, Visual BI Solutions.

 

Gartner Data & Analytics Summit 2019- Click here to view the agenda and register for the event.

About the Gartner Data & Analytics Summit 2019

Data and analytics leaders are fueling digital transformation, creating monetization opportunities, improving the customer experience and reshaping industries. Gartner Data & Analytics Summit 2019 provides the tools to build on the fundamentals of data management, business intelligence (BI), and analytics; harness innovative technologies such as artificial intelligence (AI), blockchain and the Internet of Things (IoT); and accelerate the shift toward a data-driven culture to lead the way to better business outcomes.

 

About VISUAL BI

Visual BI is a niche provider of BI & Analytics products, services & solutions. Headquartered in Plano, Texas, Visual BI has won recognition from customers for driving BI excellence by leveraging a team of platinum-level experts. Some of the world’s trusted brands rely on Visual BI’s expertise to drive BI adoption and to deliver actionable insights to their decision-makers and executives.

Highlights:

  • Best Companies to Work for in Texas, 2018
  • Ranked in the Top 50 in Deloitte Technology Fast 500, 2015
  • Ranked by CIOReview as one of the Top 100 Big Data Companies in the US
  • Microsoft Gold Partner for Data Analytics & SAP Silver Partner
  • Dedicated Visual BI Labs facility in Carrolton, TX, driving R&D and BI innovations

Visual BI’s end-to-end BI expertise covers platforms such as SAP Business Warehouse, SAP BusinessObjects BI solutions, SAP HANA®, Cloud Enablement & Integration (Azure, AWS, SCP), Big Data, advanced analytics and visualization tools such as SAP Lumira, Microsoft Power BI, Tableau, TIBCO Spotfire and more.

For more information, please visit http://visualbi.com/

Subscribe to our Newsletter

The post Visual BI makes its Debut at the Gartner 2019 Data & Analytics Summit appeared first on Visual BI Solutions.

Top N Ranking on Dynamically Selected KPI in HANA

$
0
0

One of the most common dashboard feature requests from clients is for the Top N/Bottom N functionality. As we know, this is pretty straightforward to implement. But what if the client wants the Top N or Bottom N ranking based on a KPI that is selected dynamically at runtime? In this blog, we list down the steps for Top N Ranking on dynamically selected KPI in HANA, using a single Measure. The KPI on which the ranking is to be based is passed as Input Parameter.

1. Create a HANA View, drag and drop a Projection, map it to a source with the required KPIs and Dimensions.

Note: There needs to be at least one Dimension with a single member.

2. Create an Input Parameter to receive a numeric value (here 1,2,3 etc.) corresponding to the Measure (KPI), based on which Top N Ranking needs to be done. Setting a default value is optional. Here, we set it to 1.

top-n-ranking-on-dynamically-selected-kpi-hana

 

3. Create a Calculated Column in Projection (here, ‘KPI_VALUE’) and write a Case Statement to check the Input Parameter value and return the corresponding Measure for Top N Ranking.

top-n-ranking-on-dynamically-selected-kpi-hana

Parameter Value     Measure returned
1                   State Bottle Retail
2                   Volume_Sold_(Litres)
Any other value     Bottles Sold
 

4. Create an Aggregation node on top of the Projection so that the data is aggregated to unique values for Top N ranking. Also, ensure all the KPIs/Measures are converted to Aggregated Columns.

top-n-ranking-on-dynamically-selected-kpi-hana top-n-ranking-on-dynamically-selected-kpi-hana

 

5. Create a Rank Node on top of the Aggregation Node with the settings below:

  • Choose Top N or Bottom N for the Sort Direction.
  • Set a fixed Threshold value (here, 5) or use an Input parameter for N.
  • In the Order By field, choose the Calculated Column that was created (‘KPI_VALUE’).
  • For Partition By, choose the Dimension with the single member. In this example, we restrict Year to the static value ‘2018’.
  • Enable ‘Dynamic Partition Elements’ and ‘Generate Rank Column’.

top-n-ranking-on-dynamically-selected-kpi-hana
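Steps 3 to 5 can be sketched end-to-end in plain Python as below: pick the KPI via the input parameter, aggregate per Store_Location within each Year partition, then keep the Top N per partition. Column names and sample rows are illustrative only.

```python
from collections import defaultdict

def top_n_by_kpi(rows, ip_kpi, n=5):
    # Step 3: select the measure named by the input parameter.
    # Step 4: aggregate KPI_VALUE per (Year, Store_Location).
    agg = defaultdict(float)
    for r in rows:
        value = {1: r["State_Bottle_Retail"],
                 2: r["Volume_Sold_Litres"]}.get(ip_kpi, r["Bottles_Sold"])
        agg[(r["Year"], r["Store_Location"])] += value

    # Step 5: rank descending (Top N) within each Year partition.
    partitions = defaultdict(list)
    for (year, loc), total in agg.items():
        partitions[year].append((loc, total))
    ranked = {}
    for year, pairs in partitions.items():
        pairs.sort(key=lambda p: p[1], reverse=True)
        ranked[year] = [(rank, loc, total)
                        for rank, (loc, total) in enumerate(pairs[:n], start=1)]
    return ranked

rows = [
    {"Year": 2018, "Store_Location": "Ames",  "State_Bottle_Retail": 10.0,
     "Volume_Sold_Litres": 2.0, "Bottles_Sold": 5},
    {"Year": 2018, "Store_Location": "Pella", "State_Bottle_Retail": 4.0,
     "Volume_Sold_Litres": 9.0, "Bottles_Sold": 7},
]
print(top_n_by_kpi(rows, 1))  # Ames ranks 1st on State_Bottle_Retail
print(top_n_by_kpi(rows, 2))  # Pella ranks 1st on Volume_Sold_Litres
```

Note how changing only `ip_kpi` changes the ordering, which is exactly what the single ‘KPI_VALUE’ column achieves in the HANA View.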

 

6. Map this Rank node as input to the available Aggregation node and ensure all the columns, including ‘KPI_VALUE’ and ‘Rank_Column’, are propagated to the Semantics.

7. Change ‘Rank_Column’ to type Dimension and ‘KPI_VALUE’ to Measure, then activate the HANA View. Execute the View and you will be prompted for the Input Parameter.

8. After entering a value for the Input Parameter, drag and drop the Dimension with the single member into Rows (since it was used for partitioning), along with any other required Dimensions (here, ‘Store_Location’). Place ‘KPI_VALUE’ in Columns. The output below displays the Top 5 Store Locations by ‘State_Bottle_Retail’. Note that the ‘KPI_VALUE’ column displays the same values as ‘State_Bottle_Retail’.

top-n-ranking-on-dynamically-selected-kpi-hana

 

9. Now if you change the Input Parameter value to ‘2’, the view displays the Top 5 Store Locations by ‘Volume_Sold_(Litres)’. Note that ‘KPI_VALUE’ now displays the same values as ‘Volume_Sold_(Litres)’.

top-n-ranking-on-dynamically-selected-kpi-hana

top-n-ranking-on-dynamically-selected-kpi-hana

 

The steps above show that only a single Measure in the reporting layer (here, ‘KPI_VALUE’) is needed to display Top N data based on the selected KPI.

 

Learn more about Visual BI’s SAP HANA Offerings here.


The post Top N Ranking on Dynamically Selected KPI in HANA appeared first on Visual BI Solutions.

Understanding Fixed Level of Detail (LOD) in Tableau – Part 2


In the previous blog, we focused on understanding the concept of Fixed LOD in Tableau. In this blog, let’s try understanding it with an example.

 

Scenario:

We would like to understand the purchasing pattern of customers. As the first step in this analysis, we will try to find out how many customers placed how many orders/purchases.

E.g., how many customers made exactly 5 orders?

Let’s try to understand the question better with an example.

Build the following view:

Understanding Fixed Level of Detail (LOD) in Tableau – Part 2

 

We can see that customer Cynthia Arntzen has placed seven orders, Cynthia Voltz nine orders and Cynthia Delaney five orders. Therefore, we know that one customer has placed seven orders, one customer nine orders and one customer five orders. But the business scenario is to find out how many customers placed seven, nine or five orders. Answering this by scrolling down the view above and noting how many customers have placed a certain number of orders is very cumbersome.

We can try using COUNTD([Order ID]).

Understanding Fixed Level of Detail (LOD) in Tableau – Part 2

This view shows us how many orders each customer has placed, not how many customers placed how many orders.

 

Solution

Using Fixed LOD: {FIXED [Customer Name]: COUNTD([Order ID])}

The expression can be translated as follows: for each Customer, count how many unique orders were placed.

This expression calculates how many Orders were placed by customers, but what is the significance?

Understanding Fixed Level of Detail (LOD) in Tableau – Part 2

The total number of Orders made by each Customer will be represented as a separate dimension.

We can see from the above image that a certain customer has made THREE orders in total, another customer has made SEVEN orders in total. The total number of Orders will be treated as values of a dimension. Let’s apply the Fixed LOD expression to understand better.

{FIXED [Customer Name]: COUNTD([Order ID])}

After creating a calculated field with the Fixed syntax mentioned above, drag and drop the calculation into the Dimensions pane. We do this to treat the output of the calculation as a dimension and not an aggregate. Place this dimension on the Columns shelf and COUNTD(Customer Name) on the Rows shelf.

Understanding Fixed Level of Detail (LOD) in Tableau – Part 2

We can now find how many customers placed how many orders. From the analysis, we find that 134 customers have placed 5 orders.
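The two-step logic can be sketched in Python as below. The sample rows are made-up (Customer Name, Order ID) pairs in the style of the Superstore data; the dictionary comprehension plays the role of the Fixed LOD, and the Counter plays the role of COUNTD(Customer Name) against that new dimension.

```python
from collections import Counter

# Made-up sample rows: (Customer Name, Order ID)
orders = [
    ("Cynthia Arntzen", "O-1"), ("Cynthia Arntzen", "O-2"),
    ("Cynthia Delaney", "O-3"), ("Cynthia Delaney", "O-4"),
    ("Cynthia Delaney", "O-3"),  # repeated Order ID: COUNTD deduplicates it
]

# {FIXED [Customer Name]: COUNTD([Order ID])}
orders_per_customer = {
    cust: len({oid for c, oid in orders if c == cust})
    for cust, _ in orders
}

# COUNTD(Customer Name) against that dimension:
# how many customers placed each number of orders
customers_per_order_count = Counter(orders_per_customer.values())
print(orders_per_customer)        # {'Cynthia Arntzen': 2, 'Cynthia Delaney': 2}
print(customers_per_order_count)  # Counter({2: 2})
```

Treating the Fixed LOD result as a dimension is what lets us pivot from "orders per customer" to "customers per order count".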

 

In the subsequent blogs, we will look at understanding other LOD concepts.


The post Understanding Fixed Level of Detail (LOD) in Tableau – Part 2 appeared first on Visual BI Solutions.
