
SAP Analytics Cloud – Application Design Series: 9 – Linked Analysis using Scripting


In the previous blog in this series, we learnt about Scripting APIs. In this blog, let us start with an interesting & custom implementation of Linked Analysis using scripting.

* * *

Linked Analysis

Linked Analysis enables SAP Analytics Cloud story widgets to dynamically interact with each other based on the items selected in a widget. This feature, however, is not available out of the box in Analytic Applications, where it must be implemented using scripting.

There is one upside to this: we can implement Linked Analysis exactly the way we want in Analytic Applications. Note that the out-of-the-box Linked Analysis functionality provided for stories cannot handle complex interactions and cannot be customized.

Our goal today is to achieve the chart interaction as shown below (as a side exercise, try to implement this in an SAP Analytics Cloud story, and figure out why it cannot be done).

[Screenshot]

Linked Analysis In Analytic Applications – An Example

There are three charts – Sales by Region, Sales by Category & Sales by Sub-Category.

Any selection in chart 1 (Region) must filter values in chart 2 & 3 (Category & Sub-Category). Any Selection in chart 2 (Category) must only filter values in chart 3 (Sub-Category), without setting any filters for chart 1 (Region).

Following are the steps involved to implement custom Linked Analysis in Analytic Applications.

1. Create the widgets

Here we start with the analytic application shown below, which does not have any implementation yet for Linked Analysis or filtering.

[Screenshot]

 

2. Filter for Region Selection

Add the following script to the onSelect() event of the first chart. This adds the selected region as a filter to the Category & Sub-Category charts.

var region_sel = Chart_Region.getSelection();
for (var i = 0; i < region_sel.length; i++) {
    if (region_sel[i].dimensionId === "Region") {
        Chart_Category.getDataSource().setDimensionFilter("Region", region_sel[i].id);
        Chart_SubCategory.getDataSource().setDimensionFilter("Region", region_sel[i].id);
    }
}

[Screenshot]

Note that, as of now, the onSelect() event does not support the selection of multiple values.

 

3. Filter for Category Selection

Add the following script to the onSelect() event of the second chart to filter category in the third chart.

var category_sel = Chart_Category.getSelection();
for (var i = 0; i < category_sel.length; i++) {
    if (category_sel[i].dimensionId === "Category") {
        Chart_SubCategory.getDataSource().setDimensionFilter("Category", category_sel[i].id);
    }
}

[Screenshot]

Note that in this implementation, no filter is set for the first chart (Region). This flexibility allows us to implement Linked Analysis the way we want in Analytic Applications.

 

4. Add the Reset functionality

To help the user clear all the filters, add a button ‘Reset Filters’ on the top right.

[Screenshot]

Then, in the onClick() event of the button, add the following script to clear the filter values.

 

Chart_Category.getDataSource().removeDimensionFilter("Region");
Chart_SubCategory.getDataSource().removeDimensionFilter("Region");
Chart_SubCategory.getDataSource().removeDimensionFilter("Category");

[Screenshot]

 

* * *

In the subsequent blog we will learn about Dynamically changing Measures and Dimensions in Analytic Application.

Reach out to our team here if you are interested to evaluate if SAP Analytics Cloud is right for you.



SAP Analytics Cloud – Application Design Series: 10 – Dynamically changing Measures and Dimensions


In the previous blog in this series, we learnt about custom implementation of Linked Analysis using scripting. In this blog, let us see how to dynamically display measures and dimensions in widgets using scripting APIs.

* * *

Our goal is to let the user dynamically change the dimension and measure of a chart at run time as shown below. The checkbox is used to select the dimensions, and a dropdown is used to select the measure.

[Screenshot]

Dynamically assigning measures & dimensions to widgets

 

Following are the steps involved to implement this.

1. Create the widgets

Add a column chart which shows Sales by State. Then add Checkbox Group and Dropdown widgets as follows. We’ll configure these two widgets in the next step.

[Screenshot]

 

2. Configure the Selector Widgets

To configure the Checkbox Group and the Dropdown, we can either hardcode the values in the Builder options or populate the widgets using scripting. We'll use both methods here: the Builder options to populate the dimensions and scripting to populate the measures.

Under the Builder options of the Checkbox Group add the values ‘State’ and ‘City’. Mark State as the default selection.

[Screenshot]

To populate the Dropdown widget, add the following script in the onInitialization() event of the main canvas. The script retrieves all members (from the Account dimension) and populates these values in the dropdown.

var measure_list = Chart_1.getDataSource().getMembers("Account");
for (var i = 0; i < measure_list.length; i++) {
    Dropdown_1.addItem(measure_list[i].id, measure_list[i].displayId);
}
Dropdown_1.setSelectedKey("[Account].[parentId].&[Sales]");

[Screenshot]

Save and run the application. You can see the values populated in the selector widgets. Remember that we still have not added the scripts that make the measure/dimension selections reflect in the chart.

[Screenshot]

 

3. Create Global Variables

Create two global variables chart_dim and chart_measure. The variable chart_dim is a string array that can store selections from Checkbox Group and chart_measure is a string variable that can store the selected measure from dropdown. Make sure to assign the technical name of measure ‘Sales’ as default value to the global variable chart_measure.

[Screenshot]

 

4. Configure the Checkbox Group for interaction

Add the following script in the onSelect() event of the Checkbox Group. This allows the user to choose one or more dimensions to be displayed on the chart. Note that we first clear all the existing dimensions from the chart before processing the selected values in the checkbox group.

if (chart_dim.length > 0) {
    for (var i = 0; i < chart_dim.length; i++) {
        Chart_1.removeDimension(chart_dim[i], Feed.CategoryAxis);
    }
}
chart_dim = ArrayUtils.create(Type.string);
var dimension_list = CheckboxGroup_1.getSelectedKeys();
for (var j = 0; j < dimension_list.length; j++) {
    Chart_1.addDimension(dimension_list[j], Feed.CategoryAxis);
    chart_dim.push(dimension_list[j]);
}

[Screenshot]

 

5. Configure the Dropdown for interaction

Add the following script to the onSelect() event of Dropdown. This allows the user to dynamically change measures. The procedure is similar, with the removal of existing measures happening first followed by the addition of selected measures from the dropdown.

Chart_1.removeMeasure(chart_measure, Feed.ValueAxis);
chart_measure = Dropdown_1.getSelectedKey();
Chart_1.addMeasure(chart_measure, Feed.ValueAxis);

[Screenshot]

Save & Run the application. Now you will be able to dynamically change the measures and dimensions.

* * *

In the subsequent blog we will learn about data explorer in analytic applications.

Reach out to our team here if you are interested to evaluate if SAP Analytics Cloud is right for you.


Dashboard Hacking with VBX HTML & Script Box for SAP Lumira Designer – Part 2


In our previous blog, we introduced the HTML Box and the Script Box and showed how to include a glossary index in a dashboard. In this blog, we will go over another scenario, in which we deal with multiple currencies for various measures/key figures. We have a Sales and Profit dashboard where the key figures are displayed primarily in U.S. Dollars, as shown in the snapshot below.

[Screenshot: the Sales vs Profit dashboard]

For the users who are outside of the US, the one thing that might cause an inconvenience is the currency. Some users might want to see the data flow in their native currency.

For instance, we can display the same value in 3 different currencies using the Script Box & HTML Box.

  1. US Dollars (USD)
  2. Indian Rupees (INR)
  3. Euro (EUR)

The traditional way is to model the data on the backend with the different types of currencies and provide an option to let the user switch between data internally. But that will take a big toll on time as well as effort. So, we came up with a solution.

First using the HTML box, we have designed a component as depicted in the below image.

[Screenshot: the currency selector component]

To achieve this, we used the code below in our HTML Box.

[Screenshot: HTML Box code]

Now, as the UI is ready, we must provide the functionality to let the user switch between each currency.

Switching between each radio button should change the values in the Column/Bar chart and Line chart to the corresponding currency format the user wishes to see the data in. This can be done with the help of the Script box. We used the below code to convert the currency to selected format.

[Screenshot: Script Box code]

We know that the currency conversion rate varies from time to time. To get the real-time currency conversion rate, we are using an API which provides currency conversion rate for the time of request.

Credits to http://free.currencyconverterapi.com.
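
The Script Box logic itself is shown only as a screenshot above, so here is a minimal sketch of the kind of code described, in plain browser JavaScript. The radio-button markup produced by the HTML Box, the exact endpoint format of the converter API and the way converted values are written back into the VBX charts are assumptions for illustration, not the original dashboard code.

// Hedged sketch only: element names, endpoint format and the chart write-back are assumptions.
var BASE_CURRENCY = "USD";
var baseValuesUSD = [120000.5, 98500.25, 143200.75]; // example: measure values currently shown in USD

function applyRate(rate) {
  var converted = baseValuesUSD.map(function (value) {
    return Math.round(value * rate * 100) / 100; // keep two decimal places
  });
  // In the dashboard, the converted values would be pushed back into the column/bar
  // and line charts at this point; logging stands in for that step in this sketch.
  console.log("Converted values:", converted);
}

function convertTo(targetCurrency) {
  if (targetCurrency === BASE_CURRENCY) {
    applyRate(1);
    return;
  }
  var pair = BASE_CURRENCY + "_" + targetCurrency; // e.g. "USD_INR"
  // Assumed response shape from free.currencyconverterapi.com: { "USD_INR": <rate> }
  var url = "https://free.currencyconverterapi.com/api/v6/convert?q=" + pair + "&compact=ultra";
  fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (rates) { applyRate(rates[pair]); });
}

// Wire up the radio buttons rendered by the HTML Box (assumed to share name="currency"
// and to carry the currency code as their value).
document.querySelectorAll('input[name="currency"]').forEach(function (radio) {
  radio.addEventListener("change", function () { convertTo(radio.value); });
});

Keeping the original USD values aside and always converting from them avoids rounding drift when the user switches currencies repeatedly.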

At first, the charts display the data in US Dollars (as shown in the screenshot below).

[Screenshot]

Switch between the radio buttons and see the charts change in action (as shown in the animation below).

[Animation]

To summarize, we have seen how to create a glossary index in a dashboard and how to convert data to a different currency format in real time. These are just a few of the possibilities of what we can achieve with the help of the HTML and Script Boxes; a lot more can be done when these two powerful extensions are combined.


Customizing Hierarchy Measures in DAX using InScope


Using the new InScope function for DAX that was introduced in November 2018 release of Power BI desktop, we can now customize the calculation in measures according to the hierarchy level where they are currently being used.

To illustrate the same, we have a Power BI Report where we would like to compare the price of a product across various online retailers. We use the dimensions in a matrix, displaying them in a hierarchy. Now, using the regular measure we will not be able to customize how the measure behaves at each level of the hierarchy. For example, showing the aggregated value of unit price across the Product Name does not make any sense. Neither does showing the aggregated value of unit price across the Product Category.

[Screenshot]

Using the InScope function, we can create a new measure that is sensitive to the hierarchy level.

[Screenshot: the new measure definition]

We specify that if the dimension for individual retailers is in scope i.e., measure is at the lowest hierarchy level, the usual values can be displayed. If the dimension for Product Name is in scope, we can specify that the measure shows the lowest price available for the product across all retailers.
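
The measure definition itself appears here only as a screenshot, so the following is a hedged sketch of the pattern just described. The DAX function is actually named ISINSCOPE(); the table and column names below (Retailers, Products, Prices) are hypothetical and would need to match your own model.

Hierarchy Aware Price =
IF (
    ISINSCOPE ( Retailers[Retailer] ),       -- lowest level: show the retailer's price as-is
    SUM ( Prices[UnitPrice] ),
    IF (
        ISINSCOPE ( Products[ProductName] ), -- product level: lowest price across all retailers
        MIN ( Prices[UnitPrice] ),
        BLANK ()                             -- Product Category level and above: show nothing
    )
)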

The new measure works as below.

[Screenshot]

At the retailer level, it displays the price of the product on the website. At the Product Level, it displays the lowest price available for the product across all retailers. At the Product Category Level, no value is displayed.


Value Driver Trees or Traditional Planning Approaches?


At its most basic form, a Value Driver Tree is just a visual representation of a business model, with a mathematical or a logical relationship mapping the key business performance indicators (KPI’s) and the operational drivers driving the business. When coming up with a Value Driver Tree model for a business, one of the key aspects of success is the ability to use the model as a tool for planning and forecasting using simulations made on the tree.

The concept of planning or simulation is not something new – this is something that businesses have been doing for ages now. However, most businesses follow one or more traditional approaches:

1.  Spreadsheet-based Planning

[Image]

 

This is traditional, but a more ad-hoc approach to planning within an organization. This type of planning uses more conventional tools such as Microsoft Excel with formulas and macros to run different planning scenarios. Typically, in this type of a model, analysts extract data from existing forecasting or budgeting systems for planning. While this approach does allow ease-of-use for end users, it has a few shortcomings of its own:

  • When the business model scales up, the number of parameters required will increase, making it complex and clunky.
  • Data on the spreadsheet is usually extracted, making it static for a point in time. Planning on real-time data may not be possible.
  • Sharing the model becomes difficult with other users.

With the traditional spreadsheet model, as the complexity goes up, it becomes more and more personalized for the analyst who created the model, making it difficult to maintain common planning scenarios within the business.

 

2.  Bottom-up Planning Tools

[Image]

 

Enterprise-based approaches usually involve the use of bottom-up planning tools such as SAP BPC, SAP BW Integrated Planning etc. These tools are quite robust for enterprise-wide planning and what-if scenarios. However, they come with their own drawbacks, such as:

  • Requires specialized knowledge or expertise to use and cannot be easily adopted by business users.
  • Requires maintenance to scale up business models due to the use of specialized planning objects such as sequences or functions to run the analyses.
  • The process is repetitive – data needs to be selected, processed and stored back for different planning steps or simulations.
  • The process is slow and resource-intensive due to the complexity involved in using the tools.

With the bottom-up planning tool approach, the process is either too slow or too complex. Therefore users quickly adopt spreadsheets to run ad-hoc analyses. As a result, this ends up becoming a hybrid approach over time where there is a mix of bottom-up planning and spreadsheet-based planning.

 

Can a Value Driver Tree help with these drawbacks?

[Image]

 

Apart from the gaps mentioned above, one common gap for both traditional approaches to planning is that they both lack the capability to represent the planning model visually. It becomes easier for end users to comprehend and plan based on a business model when the relationship between different drivers and KPIs can be represented visually.

Visual BI’s Value Driver Tree [VDT] has the capability to visually represent these relationships in a tree structure while also offering plenty of options for analysts to map cross-dependent relationships in the model. Here are more ways that VDT can address the gaps discussed above:

–      Model Complexity, Scaling and Performance

Visual BI’s Value Driver Tree has a simple configuration technique that can be done directly within the tool, using a configuration spreadsheet or directly from a data source such as a BW Query or a HANA View. An in-built JavaScript engine ensures that all calculations and simulations are executed almost instantly on the front-end, avoiding a back-end round trip.

–      Dynamic Models and Data Integration

Visual BI’s Value Driver Tree offers the ability to connect to live Data Sources such as BW Queries or HANA Views. Single Data Sources can be connected directly, while multiple data sources can be merged using an added component. Tree Structures can also be derived directly from a Data Source.

–      Sharing Value Driver Models between Users

Any application created for the Value Driver Tree can be published on to the BusinessObjects platform and shared with other users easily. Since Data Sources can be used to drive the tree, a single version of the truth can also be ensured.

–      Requirement for Specialized Knowledge

Configuring a Value Driver Tree can be done easily using intuitive options as part of the component (within SAP Lumira Designer) or using a configuration spreadsheet. Once configured, the options for using the tree at runtime are also very intuitive.

–      Requirement for Excessive Maintenance

Being a component that runs on SAP Lumira Designer, Visual BI’s Value Driver Tree can be easily modified using the options available in the tool, or even completely re-configured almost instantly using a configuration spreadsheet. There is no requirement for planning functions or sequences to implement or scale up planning and what-if scenarios.

Apart from the above-mentioned methods that mitigate some of the common drawbacks, Visual BI’s Value Driver Tree is extremely versatile – it can be used in several different scenarios. You can find some of these use cases here.

 

Does the Value Driver Tree replace my traditional planning models?

While the Value Driver Tree can be used as a standalone tool for planning, it does not have to necessarily replace your existing approaches altogether – the Value Driver Tree can be used to complement the existing bottom-up planning tools being used.

The Value Driver Tree has the capability to read data from SAP BW systems through SAP Lumira Designer. But in addition, it can also be used to write data back into a BW system using a RESTful web service and ABAP scripts, for instance. The tool comes with the ability to call these RESTful services for a write-back to write data back into, say, a planning cube that is being used for BW-IP.

[Image]

 

The Value Driver Tree as a tool can deliver an efficient planning and forecasting application which is quick, intuitive and agile. It can run simulations and calculations almost instantaneously and is mobile capable as well. However, thanks to the capabilities to leverage existing data sources through SAP Lumira Designer, as well as the ability to leverage modules to write data back to these data sources, it becomes a robust tool that complements existing planning solutions.

In upcoming blogs, we will discuss more topics about how customers can get more out of Value Driver Trees, and best practices and recommendations to implement the tool successfully for enterprise planning scenarios.


SAP Lumira – Why JAVA Script Compression?


This blog is an extension of my previous blog about enabling JAVA script compression in Business Objects for SAP Design Studio performance. Since Lumira 2.0 is a directly compatible upgrade for SAP Design Studio, JAVA script compression is also applicable for SAP Lumira. In this blog, we will share some information on how JAVA script compression improves the application load time.

For the sake of demonstration, a sample application has been selected and executed with and without JAVA script compression. The application load times for the same have been recorded and following are the inferences:

JAVA Script Compression Disabled

When JAVA script compression is disabled, JS minification won’t happen. So the JS files related to Lumira are bigger in size and take more time to load. Most of the resources needed for Lumira rendering like CSS, JS libraries and images load in parallel over the network. However, if the size of the components is huge, it will take more time to complete the overall application rendering.

Application Profiling Stats

[Screenshot]

 

Browser Stats – Load time & Size

[Screenshot]

 

Browser Stats – Size of Core Files

[Screenshot]

 

JAVA Script Compression Enabled

When JAVA script compression is enabled, the JS files will be minified and their size will be reduced. With the reduced size, it takes less time to transfer them over the network.

 

Application Profiling Stats

[Screenshot]

 

Browser Stats – Load time & Size

[Screenshot]

 

Browser Stats – Size of Core Files

[Screenshot]

 

From the statistics, it can be observed that the core JS file alone was compressed to about a quarter of its original size. Similarly, the other JS files were also compressed, which resulted in an overall size reduction from 7.8 MB to 2.3 MB. Even though the server-side profiling time is almost the same for both runs, there is a significant improvement in application rendering time due to the reduced file size.

As an added information, JAVA script compression is enabled by default in most releases of BI 4.2 and later releases of BI 4.1. However, this parameter might be missing in some systems and might get reverted after system upgrades and patches. So it is recommended to check this after every patch/support pack/upgrade.


SAP Lumira – Infrastructure Considerations


SAP Lumira is one of the best visualization tools available in the market for analytics and it is the tool of choice for analyzing data from SAP backend systems like SAP BW and SAP HANA. Building an optimized Lumira application demands a good amount of effort and if the infrastructure is not set up properly, even a properly designed application will not perform as expected. And in some cases, even a properly sized Lumira/Business Objects server might not deliver the results if the supporting infrastructure is not set up properly. This blog will talk about some of the infrastructure considerations to be made for the better performance of Lumira applications.

 

Some of the typical issues faced are slow load times and sluggish refresh and user-interaction times, all leading to a less than optimal experience. Note that these issues can also be caused by poor application development; in this blog, we also talk about how to identify whether an issue is due to bad design or a suboptimal configuration.

[Image]

 

Server Sizing

Server sizing is one of the most important and fundamental factors for SAP Lumira performance. The Lumira service demands a good amount of resources for ideal performance and the server should be sized properly to handle Lumira requests. Each service within Business Objects should be provided sufficient amount of memory. Since Lumira is a live application, it hits the backend system for each refresh. So the backend system should also have enough resources to handle the load. If the server sizing is not sufficient in Business Objects (or) the backend systems, a properly designed application might not perform as expected.

 

Hardware

Hardware plays a critical role in the performance of SAP Lumira applications. Business Objects servers hosting Lumira should have current or close to current generation hardware for better performance. Every aspect of hardware like CPU, RAM and Disk should have good enterprise level specifications for better performance. Even a properly sized server running in old generation hardware might not provide expected results. Reason being that the older generation hardware does not support all the new hardware acceleration options provided by current generation systems. This does not mean that you need the latest CPUs and the latest RAM to run your servers, it’s only that this is an often neglected fact that might derail your Business Objects rollout.

 

Virtualization

The Business Objects server hosting Lumira should be installed on top of a supported virtualization technology, and the virtualization software should be updated to the latest version for better performance. Using an old version of virtualization software on the latest hardware, or installing the latest version of a guest OS on an old virtualization host, might not yield the expected results. This will make the Business Objects and Lumira installations on these guests underperform and become bottlenecks.

 

Network Performance

The network plays a huge role in the performance of SAP Lumira applications; quite often it is the major factor where a properly designed Lumira application still falls short. The in-built profiling option might show a different time compared to the time observed with a stopwatch for a Lumira application, and this happens when network performance is not good. This factor often determines the first impression a user gets of the application, because it is during the first load that the JAVA Script files, CSS and images needed by the Lumira application are downloaded from the server. For consecutive loads, these files will be loaded from the browser cache, where network performance becomes immaterial. So network performance is very important for Lumira performance, and you can use options such as JAVA Script compression to allow for slower networks, as explained in this blog.

 

Statistics from a Sample Application

Profiling Info

[Screenshot]

 

Time reported in Browser

[Screenshot]

 

The time taken for the application processing is only 3.6 Seconds, however, it took 11.79 Seconds to render the application in a browser. The rendering time will be much more on a low performing network. So network performance and load, play a huge role in Lumira performance.

 

Location of Servers

Location of servers has become an important factor in recent years for SAP Lumira and Business Objects performance in general. With an increased move to the cloud and hosted environments, servers are located in diverse geographies and for an application like Lumira which hits the backend for every refresh and downloads multiple files to each clients system for rendering, this will be a critical factor. Communication between client and server which happened over the local network happens through the internet when servers are remote. So servers should be placed in nearest region possible and proper networking and routing protocols should be in place for better performance.

The same applies to Mobile and WiFi connections where we have had users experience issues due to poor network connectivity within their environment (office/shop floor etc )

In many cases, it is easy for a situation to become a  stalemate with the developers and administrators pointing to each other for possible causes and we hope that these tools and pointers will lead to more transparency towards application performance.


Secure Integration of Business Objects with your Landscape – A Study of SSO Options


In today’s heterogeneous BI world, Single Sign-on (SSO) is something that has become the foremost need in everyone’s minds. This is primarily to ensure user-adoption and also, to reduce IT-related bottlenecks that come with users having to remember multiple passwords, managing security etc.

Earlier, Single Sign-On (SSO) was an afterthought mainly because the Business did not know the shelf life of a product in their enterprise. Now with products being tightly coupled to an ecosystem or platform, it is easier to bring a new tool in and harder to take it out.

Business Objects, being a dominant and widely used analytics platform, offers multiple Single Sign-On (SSO) options. In most cases the default choice is Kerberos, which is what most people have implemented and which has a wide knowledge base, too. In this blog, we are going to look at how Kerberos is slowly becoming outdated and at the other challenges that organizations face. We will also touch upon other security protocols that you can implement right now to ensure that your Business Objects deployment stays current with enterprise security needs.

Where does Kerberos come up Short?

Being a proven and stable technology for decades, Kerberos is falling short on many fronts in today’s world due to multiple reasons, like the following:

  • Bring Your Own Devices (BYOD) – When users use their own devices, it is difficult for the domain controllers to send Kerberos tickets to the user systems. It needs additional software and complex configurations to securely send tickets to devices which are not part of the corporate domain.
  • Diverse Operating Systems – With the increased adoption of different operating systems and devices, Kerberos ticket is not a viable option anymore as some of them do not support Kerberos out of the box.
  • Browser Support – Kerberos is not supported in most of the browsers out of the box (of course, there are workarounds) and some browsers need configuration changes to work with Kerberos. Such changes make it difficult for a normal end user and also become an IT nightmare considering versions, compatibility, testing, and rollout, that have come to be expected of any Software.
  • Complex Setup – Kerberos setup in Business Objects is a little complex as Business objects is a JAVA based application. More often, Kerberos setup runs into issues and needs a good amount of debugging effort.

 

With the advent of identity providers and authentication management, there is a need for alternate Single Sign-On (SSO) options which are more flexible, support diverse platforms, adhere to the latest standards and offer a seamless experience for users.

 

[Image]



SAP Business Objects SSO – Trusted Authentication


Trusted Authentication is native to Business Objects and it is part of the “Enterprise” authentication plugin in Business Objects. As the name implies, Trusted Authentication is purely based on trust and it involves sharing of confidential information with third-party systems (or) the web application servers. SAML and X.509 Single sign-on methods in Business Objects are an extension of the “Trusted” authentication, and Trusted Authentication can be extended to be used with various other methods. It also provides the foundation for integration with custom single sign-on solutions, which are homegrown.

In Business Objects, some of the advanced single sign-on methods like X.509 and SAML are extensions of “Trusted” authentication. Trusted authentication supports multiple formats for the retrieval of user accounts and can be implemented in different ways.

[Image]

In the case of Business Objects, the “Enterprise” authentication plugin available within Business Objects allows generation of secret keys. This secret key will be shared with the trusted third party application (or) web application server. Authentication of the user will be delegated to the third party application and once authenticated, it will pass the user information on to Business objects along with the shared secret. Business Objects will then provide a session for the specific user.

The third party application which performs the actual authentication of a user will update the request from the user and will add additional properties to the request which will have the user information. Business objects will provide only session management for that user if the shared secret is correct.

 

Supported Methods

Business Objects supports the following methods for retrieving the user from a trusted server:

  • HTTP Header
  • Query String
  • Cookie
  • Web Session
  • Remote User
  • User Principal

 

The external trusted source will update the user information and send it through one of these methods:

HTTP Header

HTTP headers are components used in requests and responses in the header section of the HTTP protocol. External authentication server will update the user name to a custom HTTP header and will forward it to Business Objects. Business objects will read the username from the custom header field and the user will be granted a session.

Query String

As the name implies, a query parameter will be added to the query string and forwarded to Business Objects. Business Objects will get the username from the query parameter and grant a session to the user. As an example, a request URL like https://biserver.bi.com/BOE/BI will be updated to https://biserver.bi.com/BOE/BI/?user=User01, and Business Objects will fetch the username from the query parameter user.

Caution: Both HTTP header and Query String methods should be used with extreme caution and users should not be allowed to access Business Objects directly, as the headers and query parameters can be directly manipulated by users. When these methods are in practice, requests should go through the authentication server and not directly to Business Objects.

Cookie

The trusted authentication server can authenticate the user and provide a cookie for the user. Business objects will read the user information from the cookie.

Web Session

An external authentication server will authenticate the request and provide a web session. SAML authentication is a typical example of Web Session trusted authentication.

Remote User & User Principal

Remote User and User Principal methods get the user information from servlet functions. These methods are often used with X509 certs, Vintela authentication, and similar methods.

Trusted authentication gives the flexibility to re-use existing technologies and make Business Objects inter-operable with other tools. Trusted authentication does not expect a user to have an Enterprise alias and authenticates any user in the system irrespective of their source.

Stay tuned for more blogs on Business objects SSO.


ETL on Azure: Databricks vs Data Lake Analytics


Overview

Data Extraction, Transformation and Loading (ETL) is fundamental for the success of enterprise data solutions. The process must be reliable and efficient with the ability to scale with the enterprise.

There are numerous tools offered by Microsoft for the purpose of ETL, however, in Azure, Databricks and Data Lake Analytics (ADLA) stand out as the popular tools of choice by Enterprises looking for scalable ETL on the cloud.

This blog helps us understand the differences between ADLA and Databricks, where you can use them and how to decide on which one to choose for your type of data/business.

Working through a simple Scenario

Here we are considering a typical ETL scenario. We have taken two of the most popular Data Sources that organizations use, the Azure SQL DB and Data Lake. We have unprocessed data available in the Azure SQL DB that requires to be transformed and written to the Azure Data Lake Store repository.

[Image]

 

A look at Sample Data and its ETL requirements:

Data Source: Azure SQL Database

[Screenshot]

 

The below steps need to be performed on the data for it to be in the right format when loaded into the Data Lake Store:

  1. Format the “CostPrice” and “SellingPrice” columns to two digits after the decimal point
  2. Convert the “InventoryDate” and “PurchasedDate” columns from “DateTime” to “Date” format
  3. Introduce a new column which provides the “BenchTime” of each transaction (the date difference)
  4. Define a column which calculates the profit earned as a “Percentage”
  5. Add left-padded zeroes to the “ProductID” column to make it three digits (for example: 005, 014)

Let's take a detailed look at how the above operations can be done in both Data Lake Analytics and Azure Databricks.

Azure Data Lake Analytics (ADLA)

Data Lake Analytics is a distributed computing resource, which uses its strong U-SQL language to assist in carrying out complex transformations and loading the data in Azure/Non-Azure databases and file systems. Data Lake Analytics combines the power of distributed processing with ease of SQL like language, which makes it a choice for Ad-hoc data processing.

Demo with Azure Data Lake Analytics:

Transformation:

[Screenshot]

 

U-SQL job:

[Screenshot]

 

Transformed Data on Azure Data Lake Store:

[Screenshot]

 

Configuration: 5 Analytics Unit

Language Used: U-SQL

 

Cost:

[Screenshot]

 

Overall Time: 1 Minute 07 seconds

 

What we liked:

  • Distributed processing holds the ETL high
  • Seamless Transformation
  • Less read/write latency
  • Costs based on Jobs, not on the size of data

 

Limitations:

  • Job compilation errors are time-consuming
  • Very limited library modules

 

Preferred Use Cases:

  • For a large amount of data where conversion and loading are the only actions required
  • Process data from Relational databases into Azure
  • Repetitive loads where there is no intermediary action required

Azure Databricks

Azure Databricks is a Notebook type resource which allows setting up of high-performance clusters which perform computing using its in-memory architecture. Users can choose from a wide variety of programming languages and use their most favorite libraries to perform transformations, data type conversions and modeling. Additionally, Databricks also comes with infinite API connectivity options, which enables connection to various data sources that include SQL/No-SQL/File systems and a lot more.

Demo with Azure Databricks:

Connecting to Azure SQL DB:

[Screenshot]
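
Since the notebook code is shown only as screenshots, here is a minimal Scala (Spark) sketch of the five transformations listed earlier. The JDBC connection details, the source table name (dbo.Inventory) and the output path are placeholders and assumptions, not the exact code used in the demo.

import org.apache.spark.sql.functions._

// 'spark' is the SparkSession that Databricks notebooks provide automatically.
// Placeholder JDBC details for the Azure SQL DB source.
val jdbcUrl = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<database>"

val raw = spark.read
  .format("jdbc")
  .option("url", jdbcUrl)
  .option("dbtable", "dbo.Inventory") // hypothetical source table
  .option("user", "<user>")
  .option("password", "<password>")
  .load()

val transformed = raw
  .withColumn("CostPrice", round(col("CostPrice"), 2))                              // 1. two decimals
  .withColumn("SellingPrice", round(col("SellingPrice"), 2))
  .withColumn("InventoryDate", to_date(col("InventoryDate")))                       // 2. DateTime -> Date
  .withColumn("PurchasedDate", to_date(col("PurchasedDate")))
  .withColumn("BenchTime", datediff(col("InventoryDate"), col("PurchasedDate")))    // 3. date difference
  .withColumn("ProfitPercentage",
    round((col("SellingPrice") - col("CostPrice")) / col("CostPrice") * 100, 2))    // 4. profit in percent
  .withColumn("ProductID", lpad(col("ProductID").cast("string"), 3, "0"))           // 5. left-pad to 3 digits

// Write the result out to Azure Data Lake Store (placeholder path).
transformed.write.mode("overwrite").csv("adl://<account>.azuredatalakestore.net/curated/inventory")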

 

Transformed Data on Azure Databricks:

[Screenshot]

 

Configuration: Standard_F4s (1 Main Node and 2 Worker Nodes with a total of 8 GB Memory)

Language Used: Scala

Cost:

[Screenshot]

 

Overall Time: 5 Minutes 34 seconds

 

What we liked:

  • Spark Framework driving Big Data Analytics
  • User-friendly “Cell-based data processing”
  • Language choice for developers
  • Infinite libraries available based on the scripting language chosen
  • Autoscaling

 

Limitation:

  • Cluster’s time efficiency

 

Preferred Use Cases:

  • Processes where intermediary analysis of data is required
  • ETL which requires more visibility during modeling and transformation of data

 

Stacking up Azure Data Lake Analytics against Databricks:

  • Cost Control – ADLA: pay-as-you-go. Databricks: manual/auto-terminate clusters.
  • Development Tool – ADLA: IDE + SDK based (U-SQL supported). Databricks: notebook type.
  • Payment – ADLA: per job. Databricks: based on cluster properties, time duration and workload.
  • Scaling – ADLA: auto-scaling based on data (dynamic). Databricks: auto-scaling for jobs running on the cluster (Runtime 3.4 & above).
  • Data Storage – ADLA: internal database available. Databricks: DBFS (Databricks File System), direct access (storage).
  • Manage Usage – ADLA: portal (preferred), Azure SDK (Python, Java, Node.js, .NET). Databricks: Spark framework (Scala, Java, R and Python), Spark SQL.
  • Monitoring Jobs – ADLA: Azure Portal, Visual Studio. Databricks: within Databricks.
  • Managing Resource – ADLA: Azure Portal, Azure CLI, Azure PowerShell, Visual Studio, Visual Studio Code. Databricks: Azure Portal, within Databricks, Databricks CLI, Visual Studio Code.
  • Connectivity to Data Lake Store – ADLA: directly, using the Data Lake Store's URL path. Databricks: 1) register a web app/API (service principal), 2) associate the service principal with the ADLS storage path, 3) use the Application ID, Key and Tenant ID (Directory ID) to connect to the Data Lake Store.
  • Connectivity to Resource – ADLA: Azure Portal, Azure CLI, Azure PowerShell, Visual Studio, Visual Studio Code. Databricks: Azure Portal, Excel, SQL, ODBC.
  • Commonly Used Data Sources – ADLA: Azure Data Lake Store, Azure Blob Storage, Azure SQL DB, Azure SQL DW. Databricks: Azure SQL DB (JDBC), Azure SQL DW, Azure Data Lake Store, Azure Blob Storage, Azure Cosmos DB (Spark connector), Event Hubs (Spark connector), Hive tables, Parquet/Avro/CSV/JSON.
  • Functionalities – ADLA: scheduling jobs, inclusion in Data Factory pipelines (U-SQL scripts). Databricks: scheduling jobs, inclusion in Data Factory pipelines (Databricks notebook).
Conclusion

From our simple example, we identified that Data Lake Analytics is more efficient when performing transformations and load operations by using runtime processing and distributed operations. On the other hand, Databricks has rich visibility using a step by step process that leads to more accurate transformations. Efficiency, accuracy and scalability, should all be considered while making your decision about the right tool choice for your ETL process.

Let us know your thoughts and feedback about working with these two totally different resources that perform ETL at the same level.

References: **Pricing obtained from Azure Pricing website


Single Sign On (SSO) in SAP Analytics Cloud using SAML


SAP Analytics Cloud (SAC) is SAP's cloud-based analytics solution and is also part of the SAP Cloud Platform. SAC offers connectivity to SAP BW, SAP HANA, SAP Universes and several other data sources and cloud systems. Like any other analytics/reporting tool dealing with such diverse data sources and supporting a hybrid model, SAC needs Single Sign-On (SSO) as well. However, due to its cloud-only nature, traditional Single Sign-On options cannot be used with SAC. Instead, SAC offers an easy SAML setup interface which greatly simplifies the setup and administration of SSO.

SAC comes preconfigured with SAP’s cloud identity provider and new users signing up into the system are given self-service registration options. If the organization already has a cloud identity provider preconfigured, it can be re-used with SAC. SAC can also be configured to make use of corporate IdP (Identity Providers) and leverage the existing setup.

Requirements:

To set up Single Sign-On for SAC, the following are required:

  1. A SAML Identity Provider
  2. A Service Provider (this will be SAC in this case)
  3. A system owner account

 

Workflow:

  1. A user launches the SAC URL in the browser. SAC, as the service provider, redirects the request to the Identity Provider.
  2. The Identity Provider verifies the user's credentials with the directory. Credentials can be obtained in different ways, such as a login page, Smart Card or X.509 certificate, based on the IdP being used.
  3. Once the credentials are verified by the IdP, a SAML assertion is sent to the SAC system. Once the assertion is validated by SAC, the user is logged in.

 

[Image]

 

Setup

Setting up SSO against a custom IdP is fairly simple in SAC. Log in to SAC with the “system owner” account, go to System -> Administration -> Security, and switch to “Edit” mode.

*Note: this is the System Owner account in SAC

  1. Select the “SAML Single Sign-on (SSO)” option under “Authentication” option
  2. Download the Service Provider Metadata from SAC. Upload this metadata to IdP and setup required assertions
  3. Upload the IdP metadata into the SAC system. Once uploaded, the SAC system will show the IdP metadata expiry date
  4. Choose a user attribute. User and Email options are available by default. Custom attribute mapping is also available
  5. Then verify the setup with a user account

Constraints

SAC is case sensitive in verifying the SAML attribute. If the UserID (or) Email attribute is stored in case sensitive format in the IdP, then it should be stored in SAC with the exact same case. For an instance, if the user id is sent by SAML IdP as User01, then it should be created in SAC with the same case (User01). Else, SAML SSO will fail.

 

Renewing Identity Provider Metadata

Identity provider certificates have their own validity and when we update those certificates the metadata will also change. When this happens, SAML SSO to SAC will fail and there are no fall back options available as of now. We have to reach out to SAP support to update the new metadata in the SAC system. However, this can be prevented by following one of the steps mentioned below

  1. Switch to default cloud identity provider before SAML metadata update and then upload the new SAML metadata once the certificate is renewed
  2. Login to SAC before metadata update and update IdP metadata after that. Then upload the updated metadata into the SAC system

 

Stay tuned for more blogs on Single Sign-on options.


SAP Business Objects SSO – X.509 Authentication


After discussing, in our previous blog, the various Single Sign-On options that you can use for secure integration of Business Objects with your landscape, we now delve deeper into each one of those options.

X.509 is a standard defining the format of public key certificates and is used in internet protocols. X.509 also offers an authentication mechanism in which each user has their own X.509 certificate, signed by a certificate authority, which can be used to validate their identity. Business Objects supports the X.509 Single Sign-On mechanism out of the box. X.509 authentication and Single Sign-On in Business Objects is an extension of the “Trusted” authentication offered as part of the “Enterprise” authentication plugin. This method is fairly generic, and the implementation might vary based on the certificate provider and web application server used. Business Objects has in-built options to read the user information from the certificate and authenticate the user.

 

Trusted authentication

As the name implies, this authentication is purely based on trust. “Enterprise” authentication plugin available within Business Objects allows the generation of secret keys. This secret key will be shared with the trusted third party application (or) web application server. Authentication of the user will be delegated to the third party application and once authenticated, it will pass the user information to Business Objects along with the shared secret. Business Objects will then provide a session for the specific user. Refer to this blog for more on Trusted Authentication.

[Image]

Sample Workflow – Trusted Authentication

 

X.509 Authentication

X.509 authentication on Business objects is set up on top of “Trusted” authentication. The job of verifying the certificates can be delegated to the tomcat instance that comes with Business Objects or an external system. Since this is a generic method, this can be achieved in multiple ways. On a high level, the following are the requirements for X.509 certificate authentication and Single Sign-on:

  1. Certificate authority – Signing and revoking user certificates
  2. Trusted authentication enabled in Business Objects
  3. Tomcat or the web application server should be configured for HTTPS, with the certificate authority's root certificate in the trust store of the web application server

[Image]

Workflow with Tomcat

 

When an external authentication server is not used, tomcat will perform both the tasks of verifying the certificate and passing the user information to the Business Objects system. When a user tries to log in, tomcat will challenge for X.509 cert and users will have the certificate in their user store accessible for the browser. Once the certificate is shared, tomcat will verify if the certificate is from a trusted authority. Once verified, it will extract the username from certificates CN part and send it to Business Objects along with the secret. Once Business Objects verifies the information, the user will be granted with a session. This will be seamless for the user as he will not be challenged with a login page and browser will share the certificate with the server on his behalf.

 

Workflow with External Authentication Server

In this method, the task of verifying the certificate is delegated to an external system and once verified, the request will be sent to tomcat. This external system could be a reverse proxy (or) another web app server (or) homegrown custom authentication method and often this is done to maintain compatibility with other applications. Since the certificate verification methods natively available in tomcat are very limited and do not involve enhanced functions, it is advisable to delegate the verification part to the external system. Once the certificate is verified, the request will be updated with an HTTP header (or) a query string (or) a web session and will be sent to tomcat. Tomcat will now read the updated information like header (or) query string and fetch the user information from it. Once the user information is obtained, it will be sent to Business Objects along with shared secret and a session will be obtained.

[Image]

 

Advantages of X509 authentication and SSO

  • X.509 certificates are generic and platform-independent
  • Re-use of existing infrastructure which is configured for other systems
  • Easy to set up for users who are out of the network
  • SSO can be configured for BOE systems in a different domain (hosted systems)

 

Stay tuned for more information about alternate Single Sign-on methods available for Business Objects.


SAP Business Objects SSO – SAML- Part 01


After discussing, in our previous blog, the various Single Sign-On options that you can use for secure integration of Business Objects with your landscape, we now delve deeper into each one of those options.

Security Assertion Markup Language which is abbreviated as SAML is an open standard XML based Single Sign-on (SSO) protocol and has become the de-facto industry standard for Single Sign-on requirements. SAML is a web browser based SSO protocol that is not dependent on any Operating system (or) device types. It is a trust-based protocol and can be used to perform Single Sign-on with systems which are not part of the same domain. SAML setup in Business Objects is fairly simple and it is an option supported out of the box in latest releases.

SAML was predominantly created to allow for system trust establishment across the landscape with the expectation that it would be generic enough for most systems to understand. Today, SAML is seen as the best option for connecting your current on-premise landscape.

SAML setup in Business Objects is an extension of the “Trusted” authentication plugin which is part of the “Enterprise” authentication plugin. SAML setup does not depend on the source where the user account is imported from. Even though user accounts are imported from multiple sources like Windows AD, LDAP, SAP and created locally with Enterprise plugin, SAML SSO will still work without an issue.

 

Trusted authentication

As the name implies, this authentication is purely based on trust. “Enterprise” authentication plugin available within Business Objects allows the generation of secret keys. This secret key will be shared with the trusted third party application (or) web application server. Authentication of the user will be delegated to the third party application and once authenticated, it will pass the user information to Business Objects along with the shared secret. Business Objects will then provide a session for the specific user. Refer to this blog for more on Trusted Authentication.

[Image]

Sample Workflow – Trusted Authentication

 

SAML Setup – Requirements

SAML Setup in Business Objects is supported out of the box in the latest releases. A typical SAML setup needs a Service Provider (SP) and an Identity Provider (IdP). In this case, Tomcat web application server will act as the Service provider and IdP can be any system available in the Landscape.

System specific requirements:

  1. Business Objects 4.2 SP05 and above
  2. Trusted authentication should be enabled in Business Objects.
  3. Tomcat configured as a Service Provider (with SAML libraries provided by SAP)
  4. An Identity Provider (IdP)

 

[Image]

SAML Authentication – Workflow

 

In a SAML enabled system, authentication workflow functions in the same order as mentioned below:

  1. The user launches the Business Objects URL and request reaches tomcat server
  2. Tomcat, which acts as the Service Provider, redirects the request to Identity Provider
  3. Identity Provider will authenticate the user and will verify the credentials with the connected directory server
  4. Once the user is authenticated, IdP will return a SAML session
  5. The user name will be obtained from the session and will be sent to Business Objects along with the shared secret
  6. Business Objects system will then provide a session for the user

 

Advantages

  • Existing SAML setup can be re-used
  • SAML SSO can authenticate users imported from all sources
  • Inter-operability with available portals and Single Sign-on systems is easier
  • Cross-domain single Sign-on is possible and makes SSO setup with servers on hosted environments easier

In the next blog of this series, we explore options to setup SAML for Business Objects systems that do not support SAML out of the box.


6 Tips & Tricks For Better Tableau Performance


It is not uncommon for users to face performance issues while consuming dashboards deployed in Tableau Server. This may be experienced in the form of longer load times and/or longer interaction response times. In this blog we will be looking at a few ways to optimize performance of Tableau dashboards.

1. Set the dashboard layout to a fixed size
When creating Tableau dashboards, avoid choosing automatic sizing for dashboard rendering. This deteriorates performance, as the dashboard has to render itself according to the resolution of the end user's device, which may vary.


Instead, set the dashboard to a fixed size or one of the predefined screen sizes compatible with the screen resolutions of most users. This avoids run-time resolution adjustments during consumption, and your dashboards will load faster.

2. Try to minimize mark points
Mark points refer to each object or value shown in a dashboard. A dashboard with a high number of mark points (e.g. a tabular report) takes longer to load, especially with a live connection.


You can enhance performance by using charts to represent the same data. If a chart still has many mark points, you can restrict it further by limiting the display to the Top/Bottom N dimension values.

3. Minimize usage of Quick Filters
Avoid using too many quick filters in a dashboard. Say you have five filters in your dashboard. Every time a filter is applied, Tableau sends a query to the data source to retrieve the data points relevant to that filter. Applying five different quick filters therefore leads to Tableau querying the data source five times, which slows down data retrieval and the display of the filtered dashboard.

 


Use Dashboard Actions instead of Quick Filters for better performance. This helps the dashboard render faster and enhances its look and feel. Use Quick Filters only for dimensions or measures that are not used in the dashboard charts.

4. Minimize the number of charts used
Having many charts crowded into a single dashboard directly increases rendering and layout computation time. This is especially true when filters are applied, because the filtered data must be reflected in every chart on the dashboard.

 


The best practice is to use no more than four charts in a dashboard.

5. Retrieve only what is needed from the Data Source
In general, fetch only the data you need in the dashboard. For example, when using Extract connections, ensure that only relevant data is loaded into the Extract by applying appropriate Extract filters. Once the dashboard is built in Tableau Desktop, use the ‘hide unused columns’ option to remove columns from the extract that are not used in the dashboard – this reduces the size of the extract and enhances performance.

6. Optimize your backend configuration (hardware & software)
Ensure robust hardware, memory & storage for your backend/database to support faster data retrieval. This is especially important for Live connections. Columns that are used frequently in visualizations or as filters can be indexed in the database for better performance, as sketched below (Note: an indexed column should not be used as a context filter).
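As a rough illustration of the indexing tip – the table and column names below are hypothetical, and the exact syntax varies slightly by database:

-- Index a column that is frequently used as a dashboard filter,
-- so that live-connection queries can seek on it instead of scanning the whole table
CREATE INDEX IX_Sales_Region ON dbo.Sales (Region);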

Finally, avoid using Tableau Desktop and Tableau Server on the same machine, as it may lead to performance issues unless there is enough hardware present to balance the load.

* * *

Learn more about Visual BI’s Tableau consulting & end user training programs here.


The post 6 Tips & Tricks For Better Tableau Performance appeared first on Visual BI Solutions.

Visual BI at BI + Analytics Conference 2019


Visual BI Solutions, an SAP Silver Partner, will be participating in the BI + Analytics Conference, co-located with SAP-Centric Financials, being held this year in Dallas, TX from March 11-13.

We will be exhibiting at Booth #9.

This year, we will be showcasing our End-to-End BI capabilities ranging from Quick-Start Programs, Migration, Training and Consulting to SAP Certified Product Extensions that more than 100 leading global companies leverage. In addition, participants will also have a chance to explore our innovative product offerings such as:

  • Visual BI Extensions (VBX) for SAP Lumira Designer – Advanced Custom Visualizations for Executives & Decision Makers
  • Value Driver Tree (VDT) for SAP Lumira Designer – For Dynamic Planning, Modeling & Simulation
  • VBI View – The Only BI Portal you Need to Manage Multiple BI Platforms Effectively

Visual BI will also be demoing some unique solutions based on the SAP Analytics Cloud platform.

 

About BI & Analytics Conference:

The BI+Analytics Conference is North America’s go-to event for today’s best SAP strategies and technologies for business intelligence and analytics. From sourcing and selecting the right BI tools for your company to implementing and optimizing, BI+Analytics highlights customer success stories across numerous industries and features best practices for leveraging SAP’s full range of analytics and reporting tools.


The post Visual BI at BI + Analytics Conference 2019 appeared first on Visual BI Solutions.


6 Best Practices for Efficient Tableau Server Performance


Tableau Server delivers powerful capabilities through processes that govern extract refreshes, database connections, workbooks, data sources etc. Since so many processes are involved, it is recommended to optimize Tableau Server for better performance. In this blog we will be looking at some of the steps that help you optimize Tableau Server’s performance.

1. Using Published Data Sources (PDS)

Instead of connecting to databases every time a workbook is created, try to leverage published data sources to supply information to multiple workbooks. Leveraging published data sources provides the following benefits:

  • As a single source of data for multiple workbooks, a PDS saves a lot of space on Tableau Server, whereas with workbook-embedded data sources, space is consumed based on the volume of data in each workbook
  • Processing load on the server is optimized when multiple workbooks share the same PDS
  • Creating formulas and calculations for the data is a one-time activity in Tableau Desktop; these modifications are saved in the Published Data Source and need not be re-created every time a new workbook is developed
  • The data refresh process & orchestration become simpler, as a refresh of the PDS is reflected across all workbooks connected to it, negating the need to set up separate (and time-consuming) refreshes for each workbook
  • New workbooks can easily connect to the published data source even without a direct connection to the underlying databases


It is also essential to understand that modifications such as editing formulas or creating new calculations cannot be made directly to Published Data Sources.

2. Use Row Level Security (RLS)

Sometimes, developers create multiple versions of the same workbook catering to multiple user roles (e.g. the Manager – Eastern Region should only see data pertaining to that region). They achieve this by creating a filtered version of the data set for each role. However, this consumes a lot of space on Tableau Server and hampers its performance. In addition, future changes to the dashboard need to be applied to every version, which takes time & effort.

By utilizing Row Level Security (RLS) in a workbook, different users can view the same dashboard while the data is restricted according to their roles.

3. Schedule Data Refresh during Non-Business Hour

Scheduling data extract refreshes during office hours may take longer, as multiple processes might be utilizing the databases and tables.

Therefore, it is a best practice to set data refresh schedules during non-business hours, which ensures faster data refreshes.

4. Prioritize Schedule Refreshes

There may be many data refreshes pointing to different workbooks in a refresh schedule. Some refreshes might involve huge amounts of data and take a long time to complete, while others involve less data and therefore finish sooner.

By prioritizing data refreshes, we can set the sequence so that refreshes for workbooks with less data are triggered first, followed by the ones involving large amounts of data. This way, the reports with less data become available sooner.


5. Favor Incremental over Full Refresh whenever possible

An incremental refresh appends new records to the existing records in an extract, whereas a full refresh deletes the existing records and reloads all of them along with the new records.

It is advisable to go for an incremental refresh, as it takes much less time than a full refresh, unless there is a mandatory business requirement to reload all the data.

6. Retain Data in Cache memory

While installing Tableau Server, under the Data Connections tab, the user is prompted to choose a method for handling the cache.

a) Refresh Less Often – Data from the source is cached, and every subsequent time the report is accessed, data from the cache is displayed. This reduces the load on Tableau Server by not sending a query to the database on every access, thereby improving performance. This option is best used for data that changes infrequently. The latest data is reflected in the report only when the report is manually refreshed or when Tableau Server is restarted.

b) Balanced – The user can specify the time up to which data is cached. Data is not held in the cache beyond the time specified.

c) Refresh More Often – In the case of a live connection to the data source, a data refresh takes place in the background before the report is displayed each time it is accessed. In the case of an extract connection, data from the latest version of the refreshed extract is fetched. Although this helps us view the latest data, there is additional load on Tableau Server to fetch data from the data source every time the report is viewed. This option can be chosen when new records are added to the data source at very short intervals.

These are some of the options to optimize your Tableau Server for better performance.

* * *

Learn more about Visual BI’s Tableau consulting & end user training programs here.


The post 6 Best Practices for Efficient Tableau Server Performance appeared first on Visual BI Solutions.

3 Important Reasons Why Lumira Designer is the Ultimate Choice for a Design Thinker


It might be difficult to comprehend the power of SAP Lumira Designer (formerly known as BusinessObjects Design Studio) because the tool by itself is presumed not to be meant for end-users or business users. Also, it is commonly considered an on-premise tool that may not be suitable for generating reports like WEBI or Crystal Reports.

This article will help you understand the true power of Lumira Designer from a big-picture perspective and how it supports the human design thinking process. Keep in mind that while most people primarily build dashboards with Lumira Designer/Design Studio, the full capability of the tool lies beyond that: it is meant to build BI applications that are dynamic and customizable according to the users’ workflow. With this flexibility, IT can come to Business and have a detailed conversation about their requirements – because they can offer a lot more than reports.

 

If you are a user-centric designer or a design thinker, here are the 3 reasons you will love Lumira Designer:

Control of User Experience

User experience is different from the user interface. While self-service analytics tools in the market offer a good user interface, they don’t necessarily provide the experience your users need. An essential component of this experience is the user workflow – the sequence of tasks performed by users to achieve their objectives. When given a new self-service BI tool such as Power BI or Tableau, users first need to learn about all the functionalities and then try to come up with what they need. A lot of excitement for BI and analytics is lost along this learning curve.

In addition, when effort is spent on navigating the tool, users have less time and mental stamina to do their actual job: making informed decisions. With a design thinking mindset, the process needs to be the other way around: the tool follows the thinking process of users. Lumira Designer developers do not just build dashboards, but BI applications. These applications can be tailored to users’ customized step-by-step analytical process.

At the end of the day, we need to remember that users don’t need a self-service tool. Having a tool is not their ultimate goal, even though that’s what they would probably ask for from IT. That is because it will take time for users and designers to comprehend what they truly want. What they need is a way to discover insights from their data or to perform an analytical flow and facilitate their decision-making process. The role of IT here is to understand end-users’ workflow and make it as efficient as possible by designing navigation steps and screens within the application.

 

Quick Prototype

Iteration is a critical part of the design thinking methodology. This process requires an evolving prototype to generate feedback from users. Story-boarding and wireframes are a great start, but it is not until users can interact with the tool that they can give insightful and useful feedback. This feedback helps IT provide not only a practical and useful application but also a tool that sticks. Observing users’ interaction with the tool also helps designers learn about users’ habits and unspoken requirements. Most of the time users have developed certain analytical habits during their workflow, which are not easily communicated.

An interactive prototype is the best way for both users and designers to learn about their specific needs. Keeping this in mind, if you observe closely, SAP Lumira Designer has no real competitor in the market when it comes to this. With its versatile components and simple scripting capabilities, designers can demonstrate an interaction in a matter of minutes and build a fully functioning BI application within days. While it is possible to create customized styling with CSS, the UI5 components in Designer are user-friendly enough, requiring no initial formatting effort from the users’ end. This helps designers and users focus on goals and functionalities first.

 

Self-Service Reporting Capabilities

Despite the hype in the market, self-service BI implementations have not been as successful as organizations expected. According to Logi Analytics’ State of Analytics Adoption Report in early 2017, even though access to self-service analytics tools has been growing rapidly, the adoption of these tools has decreased by 20%. On the other hand, the report also pointed out another trend: the use of data discovery tools has peaked. What insight does this give us about the behavior of users? They like the idea of being able to extract just the data they need, create customized visualizations and share their views with other users. When Visual BI Solutions implements a self-service solution within Lumira Designer, our customers get most excited about the following capabilities:

  • Exporting data to Excel
  • Data Discovery or Ad-hoc Analysis
  • Creating and saving bookmarks and personalized views

 

In most cases, self-service analytics tools give users more than what they need, along with a steep learning curve and concerns about data integrity. What users need are self-service capabilities, not self-service tools. With basic components and especially the online composition capability in Lumira Designer, IT can deliver just the functionalities their users actually need.

The tool is meant to be the bridge between IT and business, and SAP has been consistently progressing its roadmap in this direction. From a communication perspective, Lumira Designer gives IT an opportunity to sit down with business and talk about their self-service needs. This self-service experience is embedded in a reporting or analytics application authored by IT, where the user doesn’t have to worry about complicated formulas, scripting and, especially, data integrity and governance.

 

In conclusion, Lumira Designer offers the capabilities to accommodate our design process. Keep in mind that Design Thinking is a separate subject matter with a skillset unrelated to expertise in the tool itself, and the tool does take more time and effort to master. Figuring out the requirements is an essential part of arriving at meaningful functionalities. If we shift the conversation from being all about tools and their functionalities to an increased focus on users’ needs, Lumira Designer is the ultimate solution that can adapt to various scenarios, provided there are effective communication and design-related workshops between IT and business.


The post 3 Important Reasons Why Lumira Designer is the Ultimate Choice for a Design Thinker appeared first on Visual BI Solutions.

A User-Centric Approach to Choosing Self-Service BI Tool


In a recent article, we laid out a comprehensive comparison of the most popular self-service BI tools in the market. But before considering which tool to choose based on its back-end connectivity and functionality, it is important for IT to understand business users’ problems and what they are trying to achieve with a self-service tool. Sometimes business users actively ask for a self-service tool because they think it is the solution to all their problems. But to implement successful self-service BI, we need to focus on solving users’ problems first.

Understanding Users’ Mental Processes

Within the group we call ‘business users’, each individual might have a different mental process when it comes to BI and analytics. Our users’ mental processes can be divided into two categories:

Creative thinking helps users come up with business questions, define or interpret the problem to solve, and draw conclusions from data visualizations. It also involves making decisions based on experience and intuition. This process can be unique to each person based on their level of experience. The creative thinking process also affects how users interpret and draw conclusions from data and visualizations.

Analytical thinking happens when users organize information, design a process, or build an algorithm or a data model. These activities are most effective when goals and objectives have been defined (by the creative thinking process) and provide results as input and validation for the other mental processes.


 

How much of an analytical or intuitive/creative thinker each user is depends on his or her personality or, quite often, level of seniority. More senior and industry-experienced users tend to rely more on intuition, with supporting insights from data, while less experienced users rely more on the numbers and common practices. This is reasonable because creative thinking can be personal and biased; therefore, credible individuals are more trusted with the power to make a decision. Soon, analytical tasks will be performed more and more by machines, providing inputs for humans to make decisions based on creative thinking and intuition – the task we are better at and enjoy doing more.

Nevertheless, users’ mental resources, as well as their time, are limited. The more analytical tasks users perform, the less creatively & intuitively they tend to think, and vice versa.


 

The implication is that a BI solution needs to facilitate users’ thinking process by providing just enough of the right functionalities. If you look at the self-service BI tools offered to business today, you can observe an irony: the more functionalities a tool offers, the more time users need to spend on analytical thinking rather than creative thinking. They become more focused on the tool rather than on business problems.

The art of user experience design is to give users just the right tools and design just enough steps to solve their problems. While self-service BI by definition gives users the freedom to analyze data without the involvement of IT, in reality we need to spend more effort to thoroughly understand users’ needs and build an infrastructure to support the solution.

Why Users Need Self-Service Tool

When business users request self-service BI access, we need to understand what they want to achieve with this solution. From our experience, here are a few scenarios that are often included in the ‘self-service’ solution. The problem is, users won’t tell you their problems up-front. If they think ‘self-service’ BI is the answer, they will push for the implementation. It is the IT or BI team’s responsibility to thoroughly understand the users’ needs, as well as the available in-house analytics talent, in order to implement the right tool.

To Access Data

This is the most basic requirement, yet still the most common among organizations. Here are a few criteria for this request:

  • Ability to access data without going through IT
  • Ability to access real-time data, anytime, anywhere (Ex.: for field sales representative)
  • Ability to access data on multiple platforms
  • Automation of data flow
  • Data readily modeled to answer specific questions

Users do not want to create their own charts or reports; instead, they simply want answers to questions like ‘How much have we sold year to date?’ or ‘What are our top-selling products?’. Our users, in this case, are focused on the creative thinking process and need to get their answers as quickly as possible. In addition, data integrity is a priority, to make sure the whole organization is speaking the same BI language while looking at the same numbers. The efficiency of the tool is measured by the number of steps users need to take from logging into the system to getting their answers.

Giving users access to self-service tools such as SAP Analytics Cloud, Lumira Discovery, Power BI or Tableau will generate more problems instead of solving the simple one at hand. For this need, a self-service report in WEBI, Analysis for Office or a Lumira Designer dashboard is a better solution. The self-service capability here focuses on providing an analytical template and the flexibility to change dimensions and metrics. Providing extra functionality would only distract users from consuming the information they really need. Furthermore, data governance remains secure in this case, and IT still helps document the technical definitions of business metrics.

 

To Create Reports and Communicate Business Insights

In this scenario, the business team already has a dedicated resource to create reports and share with the rest of the team. They expect a BI tool that will allow them to:

  • Create visualizations and build reports quickly
  • Easily share and distribute reports and dashboards
  • Collaborate through sharing and commenting capabilities

This group of users still focuses on the creative thinking process, but they want more freedom to answer their questions without enlisting IT’s help. What they need is an easy-to-use data discovery and reporting tool. In this area, Microsoft Power BI and Tableau are the leaders because of their easy-to-use interfaces. Although the two take different approaches to UI design, users can easily create visualizations and reports with either tool. Because our target users don’t need extensive data modeling capabilities, Power BI has an advantage with Azure, where that process can be handled more robustly and efficiently with a solid data governance policy in place. In addition, Qlik Sense also has a very intelligent data discovery interface with visual data preparation.

To Build Data Models and Perform Advanced Analytics

Users in this scenario are more analytical thinkers who want access to raw data or want to build advanced data models (Ex.: predictive or forecasting models). The business team needs to have an in-house data science resource or collaborate with a subject-matter agency. For this type of user, your BI implementation needs to offer:

  • Flexibility and ease-of-use for data modeling
  • Integration with statistical language
  • Integration with different data sources

Again, both Tableau and Power BI are good options in this aspect. If users are familiar with Excel functions, Power BI can offer more capabilities and ease of use with DAX. Tableau has more intuitive and elegant navigation with just enough functionalities to get the job done. TIBCO Spotfire can also be considered for its rich content in predictive and big data analytics, and SAP Analytics Cloud can be an option that focuses on Predictive and Planning functionalities.

 

Our conclusion is that a self-service BI implementation is successful only when users know what questions they want to answer with it. It is important for Business to define the business problem and for IT to research and understand exactly what is needed to solve that problem. A tool, after all, is just a means to achieve a goal, and any tool can help your organization only if that goal is well defined. It is also important to consider how much time and in-house resources your business team can afford.


The post A User-Centric Approach to Choosing Self-Service BI Tool appeared first on Visual BI Solutions.

Designing a Slowly Changing Dimension (SCD) in Azure Data Factory using SQL Server Temporal Tables


Temporal tables were introduced as a new feature in SQL Server 2016. Temporal tables, also known as system-versioned tables, are available in both SQL Server and Azure SQL databases. They automatically track the history of the data in the table, giving users insight into the lifecycle of the data.

Traditionally, data warehouse developers created Slowly Changing Dimensions (SCD) by writing stored procedures or building a Change Data Capture (CDC) mechanism. Temporal tables enable us to design an SCD and data audit strategy with very little programming. Temporal tables store the data in combination with a time context so that it can easily be analyzed for a specific time period.
 

Use Cases of Temporal Tables

  • Slowly changing dimensions – Temporal tables follow a Type 2 SCD approach, which keeps a history of dimension value changes in the database.
  • Data audit – System-versioned temporal tables help audit all data changes throughout the dimension’s lifetime and enable detailed auditing and reporting on the changes.
  • Time travel – They allow us to query the state of the data as of any point in time and to get insights into trends over time (see the query sketch after this list).
  • Repair record-level corruption – They act as a backup mechanism, allowing data to be restored from the history table without any loss.
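As an illustration of the time-travel use case, the query below reads a temporal table as of a past point in time. CustTemporal is the table referenced later in this post, the column names are illustrative assumptions, and the timestamp is arbitrary; FOR SYSTEM_TIME is the standard T-SQL clause for system-versioned tables.

-- Return the rows exactly as they existed at the given UTC instant
SELECT CustomerId, CustomerName, Region
FROM dbo.CustTemporal
FOR SYSTEM_TIME AS OF '2019-01-01T00:00:00';

-- Sub-clauses such as FROM ... TO and CONTAINED IN can be used instead of AS OF
-- to analyze how rows changed over a period.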

 


 

Temporal Table Creation

We can either create a new temporal table or convert an existing table into a temporal table by following the steps outlined below.
 

Creating a new Temporal Table

When a temporal table is created in the database, a history table is automatically created in the same database to capture the historical records. We can specify the name of the history table at the time of temporal table creation; if not, it is created with the default naming convention MSSQL_TemporalHistoryFor_<object_id>.

Syntax:
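A minimal sketch of the creation statement is shown below. CustTemporal and CustHistoryTemporal are the table names referenced later in this post, while the customer columns are illustrative assumptions.

CREATE TABLE dbo.CustTemporal
(
    CustomerId   INT           NOT NULL PRIMARY KEY CLUSTERED,   -- a temporal table must have a primary key
    CustomerName NVARCHAR(100) NOT NULL,
    Region       NVARCHAR(50)  NULL,
    ValidFrom    DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo      DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)                   -- system-time period declaration
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustHistoryTemporal));  -- history table named explicitly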

 

Active records reside in the CustTemporal Table


 

Historical records (Deleted, Modified) will be captured in the history table CustHistoryTemporal


Key points to note before creating the temporal table (refer to the syntax above):

  • A temporal table must contain one primary key.
  • The period for system time must be declared with proper valid-from and valid-to fields of the datetime2 data type.
  • System Versioning should be set to ON.
  • If you want a specific name for the history table, mention it in the syntax; otherwise the default naming convention is used.
  • Other optional parameters like the data consistency check, retention period, etc. can be defined in the syntax if needed.
  • The history table is page compressed by default.


The history table cannot have any table constraints. Indexes or Statistics can be created for performance optimization.
 

Converting an existing table to a Temporal Table

Converting an existing table to a temporal table can be done by setting SYSTEM_VERSIONING to ON on the existing table. Enabling DATA_CONSISTENCY_CHECK enforces data consistency checks on the existing data. Given below are the steps to be followed for the conversion; a consolidated script is sketched after the step headings.

  • Define a primary key on the table, if not defined earlier
  • Add Valid To and Valid From time period columns to the table
  • Alter the Valid To and Valid From time period columns to add a NOT NULL constraint
  • Declare System Period column
  • Enable System Versioning on the table

 

Define Primary Key on the existing table


 

Add Valid To and Valid From time period columns to the table


 

Add NOT NULL constraint


 

Declare System Period column


 

Enable System Versioning on the table
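Putting the steps above together, here is a minimal sketch of the conversion for a hypothetical dbo.Cust table (column, constraint and history table names are illustrative):

-- Steps 1 & 2: primary key (if missing) and period columns, with defaults for existing rows
ALTER TABLE dbo.Cust
    ADD CONSTRAINT PK_Cust PRIMARY KEY CLUSTERED (CustomerId);

ALTER TABLE dbo.Cust
    ADD ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL
            CONSTRAINT DF_Cust_ValidFrom DEFAULT SYSUTCDATETIME(),
        ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL
            CONSTRAINT DF_Cust_ValidTo DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);               -- steps 3 & 4: NOT NULL period columns + system period

-- Step 5: enable system versioning with a consistency check on the existing data
ALTER TABLE dbo.Cust
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustHistory, DATA_CONSISTENCY_CHECK = ON));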

 

Changing Schema or Dropping the Temporal Table

Schema changes or dropping of the temporal table are possible only after setting SYSTEM_VERSIONING to OFF.
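A minimal sketch, reusing the hypothetical table names from above:

-- Turn off system versioning before changing the schema or dropping the tables
ALTER TABLE dbo.CustTemporal SET (SYSTEM_VERSIONING = OFF);

-- The current and history tables then behave like regular, independent tables
DROP TABLE dbo.CustTemporal;
DROP TABLE dbo.CustHistoryTemporal;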


 

Loading data into a Temporal Table from Azure Data Factory

The Copy activity in Azure Data Factory has a limitation when loading data directly into temporal tables. So, we need to create a stored procedure so that the copy to the temporal table works properly, with history preserved. Given below is a sample procedure to load data into a temporal table.

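A minimal sketch of such a procedure is given below, assuming the copy activity is configured with a stored procedure sink that passes rows through a user-defined table type (dbo.CustTableType); all object and column names are hypothetical.

-- Table type through which the ADF copy activity passes the incoming rows
CREATE TYPE dbo.CustTableType AS TABLE
(
    CustomerId   INT           NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL,
    Region       NVARCHAR(50)  NULL
);
GO

CREATE PROCEDURE dbo.usp_LoadCustTemporal
    @Cust dbo.CustTableType READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- Update changed rows and insert new ones; SQL Server writes the old row
    -- versions to the history table automatically, preserving the SCD history.
    MERGE dbo.CustTemporal AS tgt
    USING @Cust AS src
        ON tgt.CustomerId = src.CustomerId
    WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName
                      OR ISNULL(tgt.Region, N'') <> ISNULL(src.Region, N'')) THEN
        UPDATE SET tgt.CustomerName = src.CustomerName,
                   tgt.Region       = src.Region
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerId, CustomerName, Region)
        VALUES (src.CustomerId, src.CustomerName, src.Region);
END;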
 

Retention Policy

Temporal tables may increase database size more than regular tables, due to historical data being retained for long periods or due to constant data modification. Hence, the retention policy for historical data is an important aspect of planning and managing the lifecycle of every temporal table. If a retention policy is defined, Azure SQL Database routinely checks for historical rows that are eligible for automatic clean-up.

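A minimal sketch of defining a retention period, reusing the hypothetical table from above (HISTORY_RETENTION_PERIOD is supported in Azure SQL Database and recent SQL Server releases):

-- Keep only the last 6 months of history; older rows are cleaned up automatically
ALTER TABLE dbo.CustTemporal
    SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 6 MONTHS));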

* * *

Learn more about Visual BI’s Microsoft BI offerings & end user training programs here.


The post Designing a Slowly Changing Dimension (SCD) in Azure Data Factory using SQL Server Temporal Tables appeared first on Visual BI Solutions.

Design Thinking in a Nutshell


Why Design Thinking?

Since the rise of User Experience Design in software development, the practice of “Design Thinking” has become widely applied in every aspect of business. Design thinking is not a methodology used only by artists (don’t let the word ‘design’ fool you), but by everyone who solves problems with a human-centered approach. The rise of this field can be explained by the rapid advancement as well as the saturation of new technology. As end-users of technologies, we have come to realize that fancy-looking tools cannot guarantee our satisfaction because we have barely spent enough time studying our actual needs and goals, as well as our thinking and working habits. Most of the time we don’t know what we really need because we lack the tools to get beyond our words and into our unconscious behaviors. That is where design thinking comes into the picture.

In the business intelligence world, user experience plays an even more important role. First, both IT and business users are overwhelmed with tool options, which are, more often than not, costly to implement and complicated to use. BI vendors are good at creating ‘buzz words’ and promising fancy new functionalities. It therefore becomes very important for organizations to really understand their users’ needs in order to pick the right tool and make the right investment. Secondly, most BI end-users still consume large amounts of data and spend a lot of time crunching it manually to get to their answers. If only they could spend more time using the insights from data to act on what they are good at! User experience is the missing piece that improves the efficiency of the collaboration between IT and business.

The good news is that the concepts in the field of user experience design are not jargon to anyone. They are very basic and intuitive, yet critical, points that we often forget to dig deep into. Here are some of the elements of UX that you might want to focus on in your next requirement-gathering session:

 

Intention

You are a designer with creative power whether you admit it or not. You have been doing it unconsciously every day of your life. You are the designer of your day: waking up early to catch the morning meeting, eating a little less to keep your weight under control, or hanging your keys near the door so you don’t forget them. Most of your behavior in daily life is designed by you to achieve a goal, whether small or big. Goals and objectives are very important guidelines in a design process, and they are the first things you need to get right. This might sound like a paradox, but to be creative, you need constraints. In other words, to think outside the box, you first need a box. These constraints will make sure your creative efforts are spent towards a practical and meaningful purpose. The focus on human needs in UX implies that we are starting to realize the one thing we need to focus on in our creations: value for users.

What usually goes wrong in BI projects is that people lose track of their objectives – why did we do this in the first place? The only way for these objectives to stick is to make them relatable to everyone involved. In other words, those objectives need to solve users’ current problems and, more importantly, users should be deeply aware of that.

While identifying the goal and objective of a project, we should not include the tool. The objective is not “create a dashboard/report for the Marketing Team”. It should be something like “Provide real-time access to Marketing Metrics across Brand Management, Demand Creation and Fulfillment to the Marketing Team”. The value of the final product needs to be recognized upfront.


 

Here are a few points to keep in mind while identifying goals and objectives:

  • Spend a lot of time clarifying the scope, goals, and objectives of the project. Use simple verbiage to make sure everyone can relate to them.
  • Identify users’ specific problems from different perspectives. The solutions to these problems need to be addressed in the objectives of the project.
  • Make sure all project stakeholders understand the value of the solution you are designing, whether it is to save time or save money. More importantly, they need to understand their personal gains. For example, business users can have more frequent access to data, and data analysts can save time on manual data extraction and aggregation tasks.
  • Keep the list of objectives to a maximum of 5 and prioritize them according to users’ needs.

 

Iteration

Iteration is a critical part of the design process. It helps teams realize that the design process is cyclic and that the solution is constantly evolving. This idea encourages team members to generate feedback and ideas that contribute to improving the product. The core of iteration dictates that there is no final solution, only further iteration. But of course, we cannot measure the success of the design by the number of iteration cycles. Gathering and analyzing feedback is also important for the final solution to move in the right direction. To achieve this, goals and objectives need to be clearly defined at the beginning and constantly reflected upon. These constraints should also provide guidance on when an iteration is good enough to go live.


 

Thinking Out Loud

For the chosen iteration to work, we need a way to communicate our ideas and test our actual behavior with a product to generate useful feedback. That is why wireframes, mock-ups, storyboarding and prototypes play important roles in the design thinking process.

 

1.    Sketch

Putting down your thoughts on paper is a great way to think. Never underestimate the power of a pen and a piece of paper. It lets you see and evaluate your thoughts from a different perspective – that of an outsider. Drawing out your thoughts also helps others understand them better. Why spend a lot of words trying to describe something when you can use the skill of a kindergartner to draw some shapes and lines? Once you get your idea out of your head and create a tangible object – like a sketch – you make it open-source. It means you welcome others to develop and iterate upon your idea. Don’t be afraid to sketch; you have a better chance of keeping people’s attention by drawing rather than by speaking only.

Sketches are used to generate quick and simple ideas, such as a visualization. Each sketch should address one specific problem to be solved.

  • If possible, make a whiteboard, paper sheets, markers and sticky notes available during your meeting. Be a leader and initiate this visual thinking
  • Encourage everyone to express their ideas by improvising upon each other’s drawings
  • Take pictures or document all these artifacts to reflect on the ideation journey later


 

2.    Wireframe

When you combine small ideas to make a more comprehensive and structured product, you create a wireframe. The wireframe also shows how you organize the content, components and functionalities within the dashboard. It provides the big picture of your UI. Structure and coherency are key in a wireframe: how you combine small answers to make a comprehensive solution.


 

3.    Storyboarding

This approach frees up human creativity by bringing in a psychological helper: storytelling. This story is about your users. A lot of the time, the developer’s focus is on how the application functions, when they need to think more about the user’s journey. Storyboarding starts with interviewing your users about the daily tasks they perform within the scope of the problem your dashboard is trying to address. There are a few questions to consider:

  • What do they need to accomplish with the data given?
  • What questions do users want to answer with the data?
  • What insights prompt them to act?
  • What actions do they take?

 

Then attempt to write a short story describing, like a narrative, the process that users go through. Here is an example story of a Product QA Manager:

When a certain product has defects and has been sent back for a quality issue, the quality manager starts by searching for the serial number. This search returns the part number related to the product. His experience tells him which part component to investigate, given the reported issue. The user then looks for the testing history of this part: on which production lines, on which testers, and what errors occurred. He needs to contact the testing engineer in charge regarding the errors. To do that, he needs more details from the error log, which tells him who oversaw testing on that line at the time the error occurred.

It is even more effective when you can get the user to tell the story with specific names as an example. That way users will feel more involved and can relate more easily to the story.

 

The next step is to illustrate the story through a storyboard, which consists of different screens showing the steps that users go through. The focus of storyboarding is the flow, which needs to follow the users’ task flow closely.


 

4.    Prototype

A further step is to build a prototype. In the scope of our discussion on BI, this refers to an interactive application. The focus of the prototype is the users’ interaction with the tool, from which we can learn about their unconscious behavior as well as how the tool can help users achieve their goals more efficiently. Because prototyping is a part of the iteration process, we need to churn it out as quickly as possible. Therefore, our focus is not on aesthetic elements or code quality, but rather on demonstrating interaction and workflow. The closer the content of the prototype is to the real-life scenario, the more useful the feedback from users becomes, because they can better relate to it.

 

Collaborative Creativity

All the methodologies listed above serve two purposes: encouraging individual creative confidence and facilitating collaboration. The first step in taking part in the design thinking revolution is to believe in your own creativity. It might have been a long time since you last built something out of Lego bricks or made a sketch, but it is human nature to create, and that nature will always be there. Furthermore, communicating ideas verbally and visually is an essential skill that helps you evaluate your own ideas as well as collaborate with others. At the end of the day, well-defined goals and objectives will keep the creative process on track towards the creation of meaningful value.


The post Design Thinking in a Nutshell appeared first on Visual BI Solutions.
