
6 Steps to Perfect the Art of Gathering Requirements from Users


One of the most important milestones in building a dashboard is the meeting where all the stakeholders, especially end-users, discuss and agree on the deliverables. This session is commonly called a requirements-gathering meeting. At Visual BI, we call these sessions design workshops because we believe that requirements are a designed product, created through collaboration among stakeholders. Here are the critical points of the discovery process that let you deliver what users need without spending days on user research.

 

1. Don’t gather requirements. Discover them.

The word “gather” implies two dangerous assumptions about our requirements. First, it suggests that the requirements are already out there, like fruit on a tree, and that our job is simply to pick them or ask for them. Second, it inherently assumes that users know what they want. That leads us to the most fundamental mistake – asking the user, “What do you want?”

So here is the first principle of user experience design:

People don’t know or realize what they really want.

The downside of asking that question is that people will give you the wrong answer. Since it is not common for people to admit that they do not know what they need, end-users tend to give us what they think is the solution to their problems. Wearing the hat of an experience designer, we need to discover what users need by first listening to and understanding their problems and motives. In other words, we care about the users so much that we don’t take their words at face value. Thus, the first step towards discovering requirements is to clear your head of preconceptions and approach users with genuine curiosity – a willingness to put yourself in their shoes, understand their problems, and get excited about discovering requirements.

Then, how do we look for what users want? By investigating their goals, workflow and pain points.

 

2. Understand the user’s goal

The function of a dashboard or any BI tool is to help the user achieve specific goals. It does not matter how many cool features or functionalities the tool has; our user needs to accomplish those goals to be satisfied. Since goal-setting is not a technical process but a human process, it is crucial to have a focused, separate conversation about goals and objectives, without referring to the tool.

What users want to achieve has a lot to do with their job and responsibilities. The first step is to identify the target audience for your dashboard and what their roles and responsibilities are in the organization. For example, within the Sales department, different user groups have different roles:

  • A Sales Representative needs to meet sales targets or maximize sales within a specific time frame.
  • A Sales Manager needs to evaluate sales representatives’ performance to determine bonuses and promotions.

While looking at the same sales transactional data, they will look for different answers.

A typical Sales Representative will need to know:

  • How far are they from achieving their sales target?
  • How many days are left in their sales cycle?
  • What is the total value of unclosed deals? Will those help them make their target?
  • How likely is each deal to close, and how soon can it be closed? (predictive analytics)

A Sales Manager will want to answer questions such as:

  • Who has met their sales target and who has not for this month/quarter/year?
  • How consistently has a sales representative been able to keep up with the goal?
  • Who are the top or bottom performers in a specific category such as product type or region?

Understanding what questions users need to answer is an essential step to designing meaningful visualizations and storyboards.

 

3. Understand users’ workflow

After identifying the user’s goal, we need to ask about the current path the user takes to achieve it. This end-to-end process starts with the trigger, the initiation of seeking out information, and ends with the final accomplishment. Analyzing this flow means we’re looking into the details of the user’s decision-making process. A dashboard or report is an information portal and just a part of this process. The structure and interaction within the dashboard need to closely align with this workflow so that the user can find the information they need as quickly as possible. Knowing what triggers the user to seek answers from the data gives us insight into the context they have in mind. As a routine, does the user look at the dashboard on a daily, weekly or monthly basis? Or is it after receiving an alert from someone else? This context will help us prioritize the information or messages that are most important to users.

The second aspect of analyzing the user’s workflow is to identify which steps directly contribute to the user’s decision-making process, and which steps are friction that should be eliminated. Here, we can quantify the value of our solution by specifying how many steps it saves the user.


 

4. Understand users’ problems

People love talking about their problems. That is an opportunity to get users engaged by focusing on solving their most critical pain points. Make sure the problems are directly related to achieving the users’ goals. For each problem, we need to evaluate the feasibility of solving it given resource, technical and time constraints. Then we can prioritize our solutions within the scope of the current project. These problems can include:

  • Users don’t trust the data they see (data integrity)
  • It takes a long time to get what they want
  • Manual manipulation of data is involved and is difficult to validate

As we lay out the plan to take away users’ pain, they will recognize the benefits instantly. Users might not see the potential value of new features yet, but they can feel the relief from solving current problems. Getting users excited about the value of the project will motivate them to stay engaged and be more open to sharing their thought process and feedback.

 

5. Write a user story

It’s time to consolidate everything you have learned about your user in a story. For each target audience of your dashboard, write a story about a user and the journey to accomplish his goals. Let this hero have a name (make sure no one is uncomfortable with the choice of the name) to make it specific and relatable, then describe every possible scenario that our hero might encounter.

An example story about how a test engineer does his job

Mark, a Test Engineer, is notified by the QA manager that a test code is affecting a production line (Trigger). He needs to investigate the history of occurrence for this test code on this production line. Mark needs to know how often this failure has occurred. Then he widens the scope of the search to other production lines and parts. Within each test code, there are many different descriptions of the problem. Mark needs to find out which are the most frequent ones and how these issues have been addressed in the past (Status: fixed or not fixed). After understanding the issue, Mark summarizes his findings and proposes a solution to his manager.

A story like this will help users get into the right mindset, better visualize their workflow, and give you more meaningful feedback on how to improve the process. Therefore, the story should be specific and relatable to users.

 

6. Sign-Off

To finish the meeting, summarize your findings from the user stories and make sure everyone agrees on them:

  • The target audience for the dashboard, their roles and responsibilities (who are the users?)
  • Main goals that can be accomplished with the dashboard (by the target group of users)
  • Questions users need to answer to achieve those goals
  • Current workflows followed by users to solve the above questions

All these findings will not only provide a concrete blueprint for your dashboard but also help evaluate the value of the final product. During user acceptance testing, users will assess the proposed value:

  • Can I accomplish my goal with the tool?
  • Does the tool help improve my workflow?
  • Does the tool solve the most critical problems I have?

The six steps above align with the principles of design thinking, or human-centered design. This approach focuses on discovering and solving people’s problems by collaborating, iterating and visualizing.

 

Reach out to us for a Design Session here.



Row Level Security – Azure SQL Server Security Recommendations (Part 1)


In this blog series, we will discuss some of the SQL Server security features available in Azure SQL Database and Azure SQL Data Warehouse. Security is a key concern as organizations actively move their data to the cloud. In this blog, we will review how the Row-Level Security feature can be used to enhance security in Azure.

 

Row-Level Security (RLS)

Very often, we come across situations where users from multiple departments within an organization use the same employee table. Without restrictions, any user within the organization could access your personal contact information. To avoid such cases, we can use Row-Level Security (RLS) to restrict access to rows in a table to specific groups of users.

RLS allows you to implement fine-grained control over your data. The restriction is applied at the database level using inline functions, so it applies to any data access on those tables. Hence, RLS acts as centralized security logic.

 

RLS supports two kinds of security predicates

  1. Filter predicate: Silently filters records while reading. It is applicable to SELECT, UPDATE and DELETE operations.
  2. Block predicate: Explicitly blocks DML operations on records that violate the predicate. It is applicable to INSERT, UPDATE and DELETE operations.

In this blog, we will be covering only Filter predicate. Click here to read about Block Predicate.

 

Filter Predicate

Since the filter predicate is applied implicitly, the user or application is usually unaware of the filtered rows. Note that the user/application is still allowed to insert rows even if they violate the filter predicate. Filter predicates are available in SQL Server 2016 (13.x) and later, Azure SQL Database and Azure SQL Data Warehouse. In fact, an RLS filter predicate can even be applied to external tables in Azure SQL DW.

Here I have created an employee table, with the following records:

[Screenshot: Employee table records]

 

And a department table with the following records:

[Screenshot: Department table records]

 

Since the employee table has personal information like the phone number, we are going to create a security policy over this table. The policy will be such that each employee can read only their own record, while the respective department heads can read all records of the employees assigned to their department.

First, we will create a security function which takes ‘EmpName’ and ‘DeptId’ as parameters. This function restricts access by comparing either ‘EmpName’ or ‘DeptName’ with the current username. Since we are applying the predicate over a foreign key, we use a subquery to implement this.

[Screenshot: security predicate function definition]
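
The function itself appears only as a screenshot in the original post. As a rough sketch of such a predicate function (the schema, function, table and column names below are assumptions based on the description above), it could look like this:

    -- Inline table-valued function used as the RLS predicate.
    -- Names are illustrative; adjust them to your own schema.
    CREATE SCHEMA Security;
    GO

    CREATE FUNCTION Security.fn_EmployeePredicate (@EmpName AS sysname, @DeptId AS int)
        RETURNS TABLE
        WITH SCHEMABINDING
    AS
    RETURN
        SELECT 1 AS AccessResult
        WHERE @EmpName = USER_NAME()                        -- employees see their own record
           OR USER_NAME() IN (SELECT d.DeptName             -- department heads see their department
                              FROM dbo.Department AS d
                              WHERE d.DeptId = @DeptId);
    GO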

 

Now we create a security policy on the Employee table using this function. We can turn the policy off later by altering its state to ‘OFF’.

[Screenshot: creating the security policy on the Employee table]
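
The corresponding security policy, again a sketch reusing the hypothetical names from the function above, would be along these lines:

    -- Bind the predicate function to the Employee table as a filter predicate.
    CREATE SECURITY POLICY Security.EmployeeFilter
        ADD FILTER PREDICATE Security.fn_EmployeePredicate(EmpName, DeptId)
            ON dbo.Employee
        WITH (STATE = ON);

    -- The policy can be switched off later without dropping it:
    ALTER SECURITY POLICY Security.EmployeeFilter WITH (STATE = OFF);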

 

Now we have a filter predicate applied on the Employee table. Even if you query as an admin user, you will get an empty result set. To test the filtering, we created two users, ‘harry’ and ‘fly’. When querying as the user ‘harry’, we get only the employee record for ‘harry’, and when querying as the user ‘fly’, we get the record for ‘hermione’, as she is the only employee in the ‘flying’ department.
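
A test along these lines can be scripted as below; this is a sketch using the user and table names from the example, not the exact script behind the screenshots:

    -- Contained users without logins, for testing only
    CREATE USER harry WITHOUT LOGIN;
    CREATE USER fly WITHOUT LOGIN;
    GRANT SELECT ON dbo.Employee TO harry;
    GRANT SELECT ON dbo.Employee TO fly;

    EXECUTE AS USER = 'harry';
    SELECT * FROM dbo.Employee;   -- returns only harry's own record
    REVERT;

    EXECUTE AS USER = 'fly';
    SELECT * FROM dbo.Employee;   -- returns only employees of the department mapped to 'fly'
    REVERT;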

[Screenshot: query results as user ‘harry’]

[Screenshot: query results as user ‘fly’]

 

That’s all for this blog. In the next blog, we will look at the block predicate, best practices, and vulnerabilities of RLS.

 

Click here to take a look at our Microsoft Azure Offerings.


Row Level Security – Azure SQL Server Security Recommendations (Part 2)


This post is the continuation of the Azure SQL Server security recommendations series. In my previous blog, we went through the filter predicate. In this blog, we will cover the block predicate and highlight some of the best practices recommended by Microsoft.

 

Block Predicate

A block predicate blocks users from making changes to the table that violate the predicate. It covers INSERT, UPDATE and DELETE operations. It is applicable only to SQL Server 2016 (13.x) and later and Azure SQL Database; it does not work in Azure SQL Data Warehouse.

There are four types of block predicates:

  • BEFORE UPDATE – Checks the existing row values against the predicate before allowing the update
  • BEFORE DELETE – Checks the existing row against the predicate before allowing the delete
  • AFTER UPDATE – Blocks the update if the new values don’t satisfy the predicate
  • AFTER INSERT – Blocks the insert if the new row doesn’t satisfy the predicate

 

We will take the same example of the employee and department tables that we saw in the previous blog. In our scenario, employees can make changes only to their own records, and department heads can add or change employee records in their department.

Employee table:

[Screenshot: Employee table records]

 

Department table:

[Screenshot: Department table records]

 

We will use the same security function that we created for the filter predicate, but we will add block predicates to the security policy, as shown below:

[Screenshots: adding block predicates to the security policy]
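
As a sketch, reusing the hypothetical object names from Part 1, adding block predicates to the existing policy looks roughly like this:

    -- Require DML to satisfy the same predicate that filters reads.
    ALTER SECURITY POLICY Security.EmployeeFilter
        ADD BLOCK PREDICATE Security.fn_EmployeePredicate(EmpName, DeptId)
            ON dbo.Employee AFTER INSERT,
        ADD BLOCK PREDICATE Security.fn_EmployeePredicate(EmpName, DeptId)
            ON dbo.Employee AFTER UPDATE;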

 

Now that we have created our security policy, let us test it with our test users, ‘harry’ and ‘fly’.

Let’s try to update a record for an employee named ‘ron’ with ‘harry’ as the user and see how the security policy works. We get 0 rows affected because, according to the filter predicate, user ‘harry’ does not have access to other employees’ records.
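
A sketch of that test (the phone number column name is an assumption):

    EXECUTE AS USER = 'harry';
    -- The filter predicate hides ron's row from harry, so nothing is updated.
    UPDATE dbo.Employee
    SET    PhoneNumber = '555-0100'
    WHERE  EmpName = 'ron';        -- (0 rows affected)
    REVERT;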

[Screenshot: update attempt as user ‘harry’ returns 0 rows affected]

 

Likewise, let us try to update the department of the employee named ‘hermione’ to ‘Magical Portions’ using the ‘fly’ user:

[Screenshot: update blocked by the AFTER UPDATE predicate]

 

Here, the permission to update is denied because of the AFTER UPDATE predicate.

The current predicates wouldn’t allow an employee to be transferred from one department to another. To facilitate that, we can remove the AFTER UPDATE predicate. This way, the department user ‘fly’ will be able to update the department of an employee within their department. Let’s remove this predicate and try running the same query.
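
Dropping a single predicate from the policy can be sketched as:

    ALTER SECURITY POLICY Security.EmployeeFilter
        DROP BLOCK PREDICATE ON dbo.Employee AFTER UPDATE;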

[Screenshot: removing the AFTER UPDATE block predicate]

 

Now the employee ‘hermione’ has been reassigned to the ‘Magical Portions’ department.

[Screenshot: ‘hermione’ reassigned to the ‘Magical Portions’ department]

 

This is how the Block Predicate can be used to prevent unauthorized DML on the tables containing sensitive information.

 

Microsoft has laid down some best practices for Row-Level Security, which apply to both filter and block predicates, as mentioned below:

  • It’s recommended to have a dedicated schema for security policies and predicate functions.
  • Exercise scrutiny when granting ALTER ANY SECURITY POLICY to any user. Only users such as a security admin should be given such elevated privileges, and the security admin does not need read access over the tables. Be aware that a malicious security admin could collude with other users to retrieve sensitive information.
  • Avoid type conversions in predicate functions to avoid run-time errors.
  • Avoid recursion in predicate functions, as this will affect performance.
  • Be cautious about using large joins in predicates, as they will affect query performance.
  • Avoid predicates that depend on session properties such as implicit date conversions.
  • It is possible to derive some sensitive information using carefully crafted brute-force queries. For instance, a user could craft a query that divides by an expression involving the salary column so that the runtime throws a divide-by-zero exception before the predicate is checked, revealing information about another user’s salary. So, it is advisable to enable audit logging on the database to monitor such access.

With this, we have reviewed Row Level Security features in Azure SQL Server. In the next blog, we will visit the Dynamic Data Masking concept as a continuation of the Azure SQL Server Security recommendations.

 

Click here to take a look at our Key Microsoft Analytics Offerings.


Data Visualization Foundation– Communicating with Charts & Graphs


Data Visualization as a Tool of Communication

Despite the number of articles and books on data visualization, many people are still intimidated by the process of creating and formatting charts. Why can’t we have a precise formula that works? After all, there is only a limited number of chart types, especially in the business context. That is because data visualization, more than choosing a chart, is the art of communication, a rather intricate human process. Our first problem is that we often don’t know the questions and answers we are looking for, let alone the right chart types. Secondly, most articles on choosing the correct chart type focus on the bottom-up approach, which suggests a chart type based on the properties of the data. What is missing is the viewer’s perspective: what information they are looking for from the visualization. That context determines how we design the message and how the user interprets what they see.

To master the art of data visualization, first let’s revisit its purpose:

[Figure: the purpose of data visualization]

 

The goal of a simple chart is to communicate some information to the viewer. So, what is information? Here is an example. Look at the table below with some data points and make as many conclusions as possible.

[Table: apples eaten by each person]

 

The information we get from this data is:

  • Andy ate the highest number of apples, while Dylan ate the least
  • Andy ate half of the group’s total count of apples
  • The group consumed a total of 18 apples
  • Andy ate twice as many apples as Bob

Information is what we learn from observing our environment (or, in this scenario, the data) and reasoning about it. Based on the same data, viewers can generate different kinds of information and insights depending on the context in their mind (which is shaped by knowledge and experience). There can be a lot of information to interpret from our example, so how can we determine what is relevant and what isn’t? It all depends on why we seek out that information in the first place. If the data comes from an apple-eating contest, we need the first conclusion. If we need to know how much each person is paying for the apples, then the second conclusion is helpful.

Who wins the Apple Eating contest?

[Chart: who wins the apple-eating contest]

 

How are we splitting the bill?

[Chart: each person’s share of the bill]

 

This simple example illustrates two parts of the communication process: the intention (why) and the message or information delivered to the receiver (what). When it comes to data visualization, first and foremost, it is essential to understand the reader’s purpose and the type of information they are seeking. This knowledge will help us craft the quantitative message and then encode it in an appropriate chart type.

 

The Advantage of Data Visualization – Why We Need Charts to Tell the Story

The second aspect of data visualization is that it delivers the message efficiently. That is because it takes advantage of the nature of our brain, as John Medina notes in his book “Brain Rules”:

Visual processing doesn’t just assist in the perception of our world. It dominates the perception of our world.

 

Sight is our most sensitive sense, and our eyes are built to help us scan the environment for survival. The sophisticated mechanism of our eyes collects data rapidly and automatically, then sends it to the brain for processing. This collaboration happens unconsciously and saves us mental effort. Additionally, we love charts because they convert the relationships among numerous data points into shapes and lines, which are easily consumable for our brain.

We pay lots of attention to orientation. We pay lots of attention to the size. And we pay special attention if the object is in motion.

-John Medina, Brain Rules


 

Because the objective of data visualization is clear and efficient communication, visual elements in charts and graphs need to focus on bringing out a comprehensible message and eliminating noise.

 

The Context of Data Visualization: Reporting & Discovery

In the BI and analytics world, there are two ways to approach data visualization: discovery and reporting. Each scenario requires a different approach to communicate efficiently with charts.

 

Reporting

When it comes to reporting, the viewers already know what goal they have in mind and what questions they need to answer. These reports often follow specific templates that cater to business customs so that users can regularly gain insights as quickly as possible (daily, weekly, or monthly). To a user, a report or dashboard provides the information needed to perform a task or make a decision. To an organization, the goal of reporting is to establish a consistent and universal BI language, providing everyone with a shared view of where the business is and where it is heading.

In this scenario, data visualization needs to be:

  • Simple so that a broad audience can comprehend that information
  • Consistent in UI and UX, especially verbiage and labeling, across all reports and dashboards
  • Instructive, providing clear definition and clarification as needed

In a reporting context, we use the top-down approach by first understanding users’ questions before creating a visualization.

 

Discovery

In the discovery process, while the goal is defined, the workflow is ambiguous. The user does not necessarily have a specific question in mind. By interacting with the charts, the user generates different queries on the fly. In this scenario, we need to design a self-service experience that lets users slice and dice data with different visualizations in an intuitive and human-oriented environment.

An example of a good self-service experience that provides the tool to users from their perspective is SAP Analytics Cloud (SAC).

[Screenshot: chart type suggestions in SAP Analytics Cloud]

 

Instead of just giving them chart types such as line, column, and pie, SAC suggests the type of quantitative relationship that can be described by each chart type: comparison, trend or correlation. These labels are more meaningful to users because the ultimate question they have is not “what chart should I use?” but rather “what do I want to know about these metrics?”. While this can be considered a bottom-up approach because the chart type depends on the data, the experience is designed top-down from the user’s perspective. The language used in the labels is more relatable to our analytical reasoning.

In a data discovery process, data visualizations need to be:

  • Appropriate for the nature of data (number of values, unit or scaling)
  • Relatable to user’s thought process, inspiring users to formulate questions on their own
  • Dynamic and flexible, catering to different possible scenarios


 

To summarize, charts and graphs are a useful tool to communicate information from underlying data. However, to communicate effectively, we first need to understand our own goals and objectives as well as the viewers’. Only then can we formulate the messages to end-users and deliver them with an appropriate chart type. Whether designing a reporting dashboard or providing a self-service experience, our goal is to help users either answer the questions they already have or generate new questions in the most intuitive and efficient way.

 

Reach out to us here if you have any queries on the Data Visualization process.


Dynamic Data Masking


This post is a continuation of the Azure SQL security recommendations series. In the earlier posts, we visited the concept of Row-Level Security for limiting user access to data. Today, we will look at how to mask sensitive data using the Dynamic Data Masking feature.

 

Dynamic Data Masking (DDM) hides sensitive data from non-privileged users. Since DDM is applied at query execution time, we can restrict data exposure to the application layer with minimal impact on the underlying data source. Multiple masking options are available, such as default value, partial (custom string), email and random number. This feature is currently available in SQL Server 2016 (13.x) and later and Azure SQL Database; it is yet to be released for Azure SQL Data Warehouse and Parallel Data Warehouse.

Here are functions available in Dynamic Data Masking:

  • Default – Automatically masks the value according to the data type: strings are masked as XXXX, numeric types are masked as 0, datetime values are masked with the low date (1900-01-01 00:00:00), and binary data types (binary, varbinary, image) are masked with a single byte of ASCII value 0.
    Example: [salary] [decimal](10,2) MASKED WITH (FUNCTION = 'default()') NULL
  • Email – Dedicated to masking email data. It exposes the first letter of the address and the email suffix (.com, .in, etc.).
    Example: [email] [varchar](50) MASKED WITH (FUNCTION = 'email()') NULL
  • Random – Masks any numeric column with a random value from a specified range.
    Example: [salary] [decimal](10,2) MASKED WITH (FUNCTION = 'random(1, 100)') NULL
  • Custom String (partial) – Allows a custom mask by specifying how many leading characters to expose, a padding string, and how many trailing characters to expose (prefix, padding, suffix).
    Example: [phonenumber] [varchar](18) MASKED WITH (FUNCTION = 'partial(1, "XXXX", 1)') NULL

 

One of the common use cases of DDM is to mask phone numbers, email addresses, salaries, bank details, etc. Unauthorized individuals gaining access to such data could misuse your personal information. This is where DDM can be your savior.

 

Here I have created an employee table as seen below and masked the salary, phone number and email columns:

[Screenshot: Employee table definition with masked columns]
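
A sketch of such a table definition (the column names and types are assumptions based on the masking functions listed above):

    CREATE TABLE dbo.Employee
    (
        EmpId       INT IDENTITY(1,1) PRIMARY KEY,
        EmpName     VARCHAR(50),
        Email       VARCHAR(50)   MASKED WITH (FUNCTION = 'email()'),
        PhoneNumber VARCHAR(18)   MASKED WITH (FUNCTION = 'partial(1, "XXXX", 1)'),
        Salary      DECIMAL(10,2) MASKED WITH (FUNCTION = 'default()')
    );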

 

A user who has the UNMASK permission would see the actual data as below:

[Screenshot: unmasked data seen by a privileged user]

 

We have now created a “test” user and granted them SELECT privileges on the employee table. When this user queries the same table, they see the data as:

[Screenshot: masked data seen by the ‘test’ user]

 

Even if you select this data into another table or export it to a file, the result will always be masked. So, any application accessing the employee table will always see the masked data, unless the user has the UNMASK permission. You can grant and revoke this permission as shown below:

[Screenshot: granting and revoking the UNMASK permission]
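
For reference, the statements behind such a screenshot would roughly be (the user name is illustrative):

    -- A sample non-privileged user with read access to the table
    CREATE USER [test] WITHOUT LOGIN;
    GRANT SELECT ON dbo.Employee TO [test];

    GRANT UNMASK TO [test];      -- 'test' now sees unmasked values
    REVOKE UNMASK FROM [test];   -- back to masked values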

 

Even though sensitive data is masked, you can still apply a WHERE clause on the masked columns. For example, if you query for records whose email is ‘harry@hogwarts.com’, you will get the following result set:

[Screenshot: filtering on a masked email column]

 

This behavior could be exploited by an unprivileged user to derive sensitive data using brute-force methods. For instance, you could narrow down an employee’s salary by running queries as seen below:

[Screenshot: narrowing down a masked salary with range queries]
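
A sketch of the kind of query involved (purely illustrative):

    EXECUTE AS USER = 'test';
    -- Salary is returned masked, but the WHERE clause is evaluated against
    -- the real value, so repeated range queries can narrow it down.
    SELECT EmpName, Salary
    FROM   dbo.Employee
    WHERE  Salary BETWEEN 90000 AND 100000;
    REVERT;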

 

This shows that DDM is vulnerable to users with malicious intent trying to access underlying data. So, it is important to properly manage access and only provide access to vetted individuals. Also, it is advisable to enable database auditing to identify such illicit access to data.

With this, I am concluding my Azure Security recommendations. I plan to keep writing on security recommendations in my future blogs. Stay tuned!

 

Click here to take a look at our Key Microsoft Analytics Offerings.


User-Defined Schema in Databricks


If you’ve been working with CSV files in Databricks, you must be familiar with a very useful option called inferSchema for loading CSV files. It is an option widely used by developers to identify the columns, data types, and nullability automatically while reading the file.

 

inferSchema

In the example below, the .csv file is read through the spark.read.csv function by providing the file path, the inferSchema option, and the header option.

[Screenshot: reading a CSV file with the inferSchema option]
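
A minimal sketch of that call (the file path is a placeholder):

    # Read a CSV file, letting Spark infer column names, types and nullability
    df = spark.read.csv(
        "/mnt/raw/accounts.csv",   # hypothetical path
        header=True,               # use the first row as column names
        inferSchema=True           # scan the file to infer data types
    )
    df.printSchema()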

 

By setting the header to ‘true’, Databricks uses the first row of the file for column names.

Below is the code executed in Databricks:

[Screenshot: code executed in Databricks]

 

With the inferSchema option set to true, Databricks runs a pass over the complete file to determine the column names, data types, and nullability. The output obtained is the schema of the DataFrame inferred by Databricks.

[Screenshot: schema inferred by Databricks]

 

There are situations where the inferSchema option will not work as expected. It sometimes detects the data type and nullability state incorrectly. In the example above, the Account Number Field is detected as a long data type, but the source has the account number stored as a string.

An identifier field like account number would never be used for aggregation or simple addition and subtraction. Let’s assume it should be kept as a string. Also, the column may contain alpha characters in the future and in that case detecting the data type as a long would cause a failure when loading.

To overcome this, you can apply a User-Defined Schema in Databricks to a file.

 

User-Defined Schema

In the code below, the required data types are imported from pyspark.sql.types. Each StructField takes three arguments: the field name, the data type, and nullability. Once the schema is defined, pass it to the spark.read.csv function so that the DataFrame uses the custom schema.

[Screenshot: defining a user-defined schema]
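
A sketch of the approach, with illustrative column names:

    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

    # Each StructField takes (field name, data type, nullable)
    custom_schema = StructType([
        StructField("AccountNumber", StringType(), True),   # keep identifiers as strings
        StructField("TransactionDate", DateType(), True),
        StructField("Amount", DoubleType(), True),
    ])

    # Pass the schema instead of inferSchema; no extra pass over the file is needed
    df = spark.read.csv("/mnt/raw/accounts.csv", header=True, schema=custom_schema)
    df.printSchema()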

 

In the obtained output, the schema of the DataFrame is as defined in the code:

[Screenshot: DataFrame schema as defined in the code]

 

Another advantage of using a user-defined schema in Databricks is improved performance. With inferSchema, Spark loads the complete file to determine the data types and nullability and build a solid schema. If the file is very large, running a pass over the complete file takes a lot of time.

A user-defined schema in Databricks avoids that pass over the file, so performance improves significantly with large files.

 

Conclusion

Building a user-defined schema in Databricks manually is time-consuming, especially when the file has many columns. However, this method helps maintain the intended schema and can also improve performance to a great extent.

Let us know your thoughts and feedback about working with user-defined schemas in Databricks.

 

Click here to take a look at our Microsoft Azure Offerings.


Visual BI Solutions Announces its Participation at SAPPHIRE NOW® to Showcase Offerings for SAP Lumira®, SAP Analytics Cloud and End-to-End BI Capabilities


Plano, TX – May 2, 2019 – Visual BI Solutions, an SAP Partner and a niche Business Intelligence (BI) & Analytics firm, today announced it will participate at SAPPHIRE NOW® and ASUG Annual Conference being held May 7-9 in Orlando, Florida in booth #1430A.

In addition to providing a demo of their products, Visual BI Extensions (VBX), ValQ and VBI View, Visual BI will also be meeting with SAP representatives, partners, and customers for in-depth business-driven discussions.

[Image: Visual BI at SAPPHIRE NOW 2019, booth #1430A]

 

Visual BI has consistently presented at SAPPHIRE NOW, offering valuable additions to the SAP ecosystem each year. This year, Visual BI will exhibit a variety of products and services offering end-to-end BI capabilities ranging from SAP certified products; innovative solutions; quick-start workshops; strategy/roadmap sessions; and migration, training and consulting.

“SAPPHIRE NOW will be a platform that offers tremendous opportunity for us to showcase our unique value-driven products that help enterprises deliver actionable insights and customer success stories — covering SAP Analytics Cloud, SAP® BusinessObjects™, and SAP BW/4HANA– with both business users and technical support teams,” said Gopal Krishnamurthy, Founder/CEO, Visual BI Solutions.

Click here to view the agenda and register for the event.

 

SAPPHIRE NOW and ASUG Annual Conference are the world’s premier business technology event and largest SAP customer-run conference, offering attendees the opportunity to learn and network with customers, SAP executives, partners, and experts across the entire SAP ecosystem.

 

About VISUAL BI

Visual BI is a leading SAP-certified BI enablement firm providing strategic consulting, software products and solutions that enable agile, mobile, self-service and real-time BI. More than 100 leading global companies leverage our proprietary software products and market-leading expertise in BI & Analytics.

Highlights:

  • Best Companies to Work for in Texas, 2018
  • Ranked in the Top 50 in Deloitte Technology Fast 500, 2015
  • Ranked by CIOReview as one of the Top 100 Big Data Companies in the US
  • Microsoft Gold Partner for Data Analytics & SAP Silver Partner
  • Dedicated Visual BI Labs facility in Carrolton, TX, driving R&D and BI innovations

Visual BI’s end-to-end BI expertise covers platforms such as SAP Business Warehouse, SAP BusinessObjects BI solutions, SAP HANA®, Cloud Enablement & Integration (Azure, AWS, SCP), Big Data, advanced analytics and visualization tools such as SAP Lumira, Microsoft Power BI, Tableau, TIBCO Spotfire and more.

# # #

SAP, SAPPHIRE NOW, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE (or an SAP affiliate company) in Germany and other countries. See http://www.sap.com/corporate-en/legal/copyright/index.epx for additional trademark information and notices. All other product and service names mentioned are the trademarks of their respective companies.


Tips and Tricks to Building a Well-Structured Semantic Model in Power BI


Power BI is a great tool to use when we have structured data. We want our tables to be related and easily readable by the person creating the report. However, we don’t always have data that is well formatted. Here are several tips and tricks to create a well-structured semantic model in case you don’t have a well-maintained data model.

 

Tabular Format

It is crucial to have tabular data to build visualizations in Power BI. Without it, our reports wouldn’t be functional.

To mark as a table:

  • Highlight the table and click Format as Table. (This allows us to read a single table, rather than the whole sheet.)
  • Go to the design tab and change the properties of the table. By default, it will be ‘Table 1’.

The table can now be read by Power BI as a separate entity from its sheet.

[Screenshot: data formatted as a table in Excel]

 

Unpivoting Data

Once imported into Power BI, you will notice that we have 6 different columns: one ‘Name’ column and 5 columns with a date as the name. This is not easy to work with, since we would have to create a calculated column in DAX to aggregate the data across all days. In this case, it is best to unpivot the date columns. To do this:

  • Click on the Edit Queries button in the Home tab, which will navigate us to Power Query.
  • Click on the top of the first column, then Ctrl + Shift+ Left Click the last column, which should highlight all the date columns shown below.
  • Click the Unpivot Columns button in the Transform tab. Now we will see hours for multiple dates and names represented in the Value column.
  • Change the name of the Attribute column to ‘Date’ and the Value column to ‘Hours’. The data is now transformed to have an entry for each person and each date.

[Screenshot: unpivoted data in Power Query]

 

Now let us add an ID to help us distinguish between each entry.

  • Go to the Add Column tab and select the Index Column drop-down. Select ‘From 1’.
  • Drag the Column to the left side to reorder it. Once done it should look like this:

[Screenshot: index column added and moved to the first position]

 

Close and Apply the changes. Now we will have 4 columns: ID (which can be hidden), Name, Date, and Hours. We can now create a simple bar chart with total hours per person:

[Screenshot: bar chart of total hours per person]

 

Drawing Relationships

Creating relationships allows us to filter values on a table from a different table. This is necessary if you want to compare related values from many tables in your visualizations.

When creating relationships, we have the option of controlling both the cardinality and the direction of the filter.

Cardinality is the ratio of the count of matching values (e.g. IDs) in one table to the other. We aim for a cardinality of one-to-many or many-to-one (depending on the order), with a single filter direction from the table on the ‘one’ side to the table on the ‘many’ side.

In this example, I created another table called Budget with the same structure as the table we created above, but with different values.

[Screenshot: Budget table]

 

We can see the same value ‘John’ multiple times in each table. If we drew a relationship at this point, it would be a many-to-many relationship. This is not recommended, especially if you want your model to scale out to more tables.

An easy solution is to create a new master table that holds the distinct values from both tables. To do so:

  • Create a New Table in the Modeling tab called NAME, which will only contain the distinct names of the people.
  • We then write a simple DAX formula (see the sketch after this list):

[Screenshot: DAX formula for the NAME table]

  • Draw our relationships to represent one to many.
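
The formula in the screenshot is not reproduced here, but a distinct-name table can be sketched in DAX along these lines (the hours table is assumed to be named ‘Actuals’):

    NAME =
    DISTINCT (
        UNION (
            DISTINCT ( Actuals[Name] ),
            DISTINCT ( Budget[Name] )
        )
    )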

[Screenshot: one-to-many relationships in the model]

 

We are now ready to build a visualization using all 3 tables. We want to make sure that we use the ‘NAME’ column in the new table as the axis or intermediary between the values of the two tables.

Below is the ‘Line and Clustered Column Chart’ that compares the actual hours in the first table to the budgeted Project hours.

[Screenshot: Line and Clustered Column Chart comparing actual and budgeted hours]

 

Many of the techniques above can also be replicated within Azure Analysis Services or SSAS (Tabular). Both Power BI and SSAS use a very similar GUI and both support DAX. Generally, we would look to upgrade to SSAS when our data starts to grow too large for Power BI. SSAS allows us to share our data model easily throughout the entire enterprise while also giving us several options of front-end tools to work with.



Fetching Historical Data in Azure Data Factory


Triggers are like job schedulers for the execution of pipeline runs in Azure Data Factory. Presently, three types of triggers are supported in ADF:

  1. Schedule trigger: A trigger that executes a pipeline on an absolute schedule
  2. Tumbling window trigger: A trigger that operates at periodic intervals and also retains state
  3. Event-based trigger: A trigger that responds based on events

In this blog, we will discuss the tumbling window trigger and how it supports fetching historical data in the Azure Data Factory.

 

Tumbling Window Trigger

A typical ETL package is built to process data from a point in time forward, so historical data will not be loaded from source to target and a separate load is required for those datasets. The process of adding missing data from the past to the target is termed Historical Data Collection or Data Backfilling. In traditional ETL, backfilling data requires enormous amounts of manual work and time to build effective SQL scripts. Microsoft has provided a feature named the Tumbling Window Trigger, which is primarily designed for fetching historical data in Azure Data Factory. A tumbling window trigger fires in a sequence of non-overlapping, contiguous periodic time intervals from a specified start time, while also retaining state.

 

Execution of the Tumbling Window Trigger

To fetch historical data in Azure Data Factory, we first create a tumbling window trigger under the Triggers tab by defining the properties given below.

  1. Name – Trigger Name
  2. Type – Type of the trigger – ‘Tumbling Window’
  3. Start Date (UTC) – The first occurrence of the trigger, the value can be from the past
  4. Recurrence – A frequency unit at which the trigger recurs. The accepted values are minutes and hours. For example, if the trigger needs to trigger once every day, then the recurrence is set as 24 hours
  5. End (UTC) – The last occurrence of the trigger, the value can be from the past
  6. Delay – The amount of time to delay before starting the data processing for the window. This will not affect the Start Date (UTC)
  7. Max Concurrency – The maximum number of simultaneous trigger runs that are fired for ready windows
  8. Retry Policy: Count– Number of retries if the pipeline run fails
  9. Retry Policy: Interval in seconds – Delay between retry attempts

In the pipeline section, execute the required pipeline through the tumbling window trigger to backfill the data.

In the example below, I have executed a pipeline run for fetching historical data in Azure Data Factory for the past 2 days by a tumbling window trigger which is a daily run. I have taken 04/22/2019 as the current date so the start date will be 04/19/2019 as it is two days prior to the current date.

[Screenshot: tumbling window trigger configuration]
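
For reference, the trigger definition that the UI generates looks roughly like the JSON below; the names, dates and pipeline reference are illustrative:

    {
        "name": "HistoricalLoadTrigger",
        "properties": {
            "type": "TumblingWindowTrigger",
            "typeProperties": {
                "frequency": "Hour",
                "interval": 24,
                "startTime": "2019-04-19T00:00:00Z",
                "endTime": "2019-04-22T00:00:00Z",
                "delay": "00:00:00",
                "maxConcurrency": 10,
                "retryPolicy": { "count": 2, "intervalInSeconds": 30 }
            },
            "pipeline": {
                "pipelineReference": {
                    "referenceName": "PL_LoadSalesHistory",
                    "type": "PipelineReference"
                },
                "parameters": {
                    "windowStart": "@trigger().outputs.windowStartTime",
                    "windowEnd": "@trigger().outputs.windowEndTime"
                }
            }
        }
    }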

 

 

The execution result of the trigger run is in the form of output values of the tumbling window trigger:

[Screenshot: output values of the tumbling window trigger run]

 

  1. Trigger Time – Current Time
  2. windowStartTime – The actual window start time, spaced at the periodic interval from the previous window start time. The first window start time is the trigger start time scheduled by the user; after that, it is calculated by adding the recurrence value to the last windowStartTime.
  3. windowEndTime – The actual window end time, calculated by adding the recurrence value to the window start time.

The windowStartTime and windowEndTime can be passed as values to the pipeline run to fill timestamp parameters within an activity when fetching historical data in Azure Data Factory. Add a dynamic parameter for the timestamp and populate it using the expressions given below:

  1. windowStartTime – trigger().outputs.windowStartTime
  2. windowEndTime – trigger().outputs.windowEndTime

 

Note:

  1. The trigger name acts as a unique identifier. Modifying the properties after a trigger has been published and has already completed a run will not affect or re-execute that past run, because the trigger checks the trigger name and considers the backfill already completed. The modifications apply to future pipeline runs only.
  2. Once the trigger is published, the start date cannot be modified; however, the end date can be.
  3. The trigger executes at the regular interval specified by the recurrence, based on the start date. Once the trigger is published, it waits until the next recurrence, and the backfill starts only after that. For example, if I schedule a run at 9:50 AM at a frequency of 24 hours and publish the trigger at 8:30 AM, the trigger will wait until the recurrence time (9:50 AM) to kick off the pipeline run.
  4. One tumbling window trigger can execute only one pipeline; it is a one-to-one relationship between them.

 

Learn more about Visual BI’s Microsoft Azure offerings here.


Data Security in SAP Analytics Cloud


In SAP Analytics Cloud (SAC), the data models are accessible to all users by default. In order to provide data security, SAC has various options under ‘Access and Privacy’ of ‘Model Preferences’. In this blog, we will go over the different options available to restrict access to the data models.

 

Data Security at the Model Level

By enabling model data privacy, you can restrict user access to the model. Only the owner and users with roles that are granted access can access data from the model.

Please note that ‘Model Data Privacy’ can only be enabled in imported/acquired models.

[Screenshot: opening Model Preferences]

 

Then, in the ‘Access and Privacy’ tab under Data Access, enable the ‘Model Data Privacy’ option.

[Screenshot: enabling the Model Data Privacy option]

 

For users who do not have access to the model, the error ‘You have no authorization to the model’ is displayed as shown below.

[Screenshot: ‘You have no authorization to the model’ error]

 

Data Access Control in Dimensions

Apart from restricting the entire model, you can also restrict access to specific dimensions from the ‘Model Preferences’ popup. Please note that ‘Data Access Control in Dimensions’ can only be enabled for imported/acquired models. Once restricted, a user with restricted access will only be able to see the dimensions to which they have access. All other dimensions will not be visible.

[Screenshot: enabling Data Access Control for a dimension]

 

Data Access Control to Members of a Dimension

In order to allow only partial access to the data in the dimension for certain users, you would need to customize the data access control further.

[Screenshot: dimension data access settings]

 

Select the dimension name and then switch to grid mode.

[Screenshot: dimension in grid mode]

 

Here you will see two columns, ‘Read’ and ‘Write’. You can select a cell and add users for ‘Read’ and ‘Write’ columns.

[Screenshots: adding users to the Read and Write columns]

 

Now after specifying the users, save the model. Only these users will have rights to Read/Write data for the selected dimension.

[Screenshot: data access limited to the specified users]

 

Restricted Export to CSV

All users can export the models to a CSV file, by default. To restrict users from exporting data to a CSV file, you would need to enable the option ‘Restricted Export’ in the ‘Model Preferences’ popup. Unlike the above-discussed options, you can restrict the export even for LIVE models.

[Screenshot: enabling the Restricted Export option]

 

With these privacy options, SAP Analytics Cloud allows us to customize and manage data security easily.

 

Reach out to us here today if you are interested in evaluating if SAP Analytics Cloud is right for you.


Visual BI’s 3rd Party Business Content for SAP Analytics Cloud


Business content for SAP Analytics Cloud is a library of ready-to-run analytical application templates based on replicated data for specific business scenarios, requiring no additional systems to view it. The content comes ready to run with built-in data, and further lets you supplement your enterprise data to build more personalized use cases and derive insights. With minimal customization effort, the stories and visuals can be plugged in to consume data from your environment by changing the connections according to your infrastructure. Connectivity includes any platform supported by SAC, ranging from Excel, SAP BW and SAP HANA to any other cloud or on-premise data source.

 

Our Solution Highlights

Visual BI offers three pre-packaged business solutions that deliver leading-edge decision-making capabilities to executives:

 

1. Digital Boardroom Solution for Sales and Distribution Analytics

This content provides insights on sales, product performance and vendor performance, and allows business users to plan and forecast by product and category. This pre-packaged business content covers a typical scenario for a distribution business that procures multiple products from multiple vendors and then distributes them to several retail outlets. The KPIs provide a 360° view of sales across the enterprise (by vendor, product, product category, and store), along with planning and forecasting capabilities.

[Screenshot: Sales and Distribution Analytics digital boardroom]

 

2. Accounts Receivables & Cash Flow (AR & CF) Analytics

ARCF solution leverages innovative visualization techniques to analyze Accounts Receivables & Cash Flow for a multi-entity global enterprise dealing in multiple currencies.

The solution allows you to analyze balances and receivables over time and across various aging buckets. You can view the status by invoicing entity, including options for intra-company transactions. A quick toggle lets you switch between Receivables and Cash Flows, with filters by geography, currency, customer and sales representative. Accounts Receivables and Cash Flow for enterprises dealing in multiple currencies can be compared as well. Outlying customers for each sales representative can be identified using a box-plot analysis, and doubtful debts using a bubble chart, with options to filter by overdue balance, doubtful debt, and percentage of doubtful debt.

[Screenshot: Accounts Receivables & Cash Flow Analytics]

 

3. SAP Digital Boardroom for Upstream Oil & Gas

This pre-packaged business content helps executives track and benchmark performance across Operations, Finance, Procurement, and Health & Safety.

The home page provides a 360° view of the business by delivering quick insights on performance for the current period. Executives can opt to drill down into specific KPIs of interest.

This solution lets you view how the KPIs are distributed across the different metrics in a tile layout. It further allows you to drill into the details using predefined hierarchies or use the Jump functionality to navigate directly to the KPI-based analysis.

[Screenshot: SAP Digital Boardroom for Upstream Oil & Gas]

 

Features of the Business Content

  • Rapid go to Market – Plug and play solution with minimal customization efforts, where the content can be used for your business.
  • Highly interactive solutions, with the ability to perform linked analysis with almost all the visuals that are available in the content.
  • Bring your own data (BYOD) and leverage existing models and stories to deliver an engaging SAP Digital Boardroom solution for executive and end users.
  • The KPIs are customizable and so are the stories and visuals to suit your enterprise needs.
  • The solution leverages SAP Digital Boardroom and Planning capabilities for maximum impact and performance.

 

For more details on each individual package, explore the content library in the application or check out the below-given links to SAP App Center,

  1. SAP Digital Boardroom for Sales and Distribution Analytics – https://www.sapappcenter.com/apps/36473/visual-bis-digital-boardroom-for-sales-and-distribution-analytics
  2. Accounts Receivables & Cash Flow Analytics – https://www.sapappcenter.com/apps/36477/visual-bis-accounts-receivables-and-cash-flow-analytics-for-sap-analytics-cloud-sac
  3. SAP Digital Boardroom for Upstream Oil & Gas – https://www.sapappcenter.com/apps/29753/visual-bis-sap-digital-boardroom-for-upstream-oil-gas

 

Reach out to us at solutions@visualbi.com for a personalized offer covering one or more of the following:

  1. Strategy Workshops (<1 day)
  2. Proof of Concepts (1-3 weeks)
  3. Custom Implementation and Training


Bookmarks in SAP Analytics Cloud


[Last updated: 26th of April 2019, SAP Analytics Cloud Version 2019.8]

Creating a bookmark gives you quick and easy access to frequently used scenarios. Using this feature, you can save the state of input controls, prompts, charts, tables, and geo filters and revert to them when needed. SAP Analytics Cloud allows you to switch between multiple scenarios with ease by letting you create multiple bookmarks. This blog will describe how to implement the bookmark functionality in SAP Analytics Cloud.

 

Private and Global Bookmarks

Consider the story below as an example.

[Screenshot: sample story]

 

After you analyze the story and derive actionable insights, you may want to freeze the data to revisit the same view at a later point in time. You can save your selections by bookmarking the story and avoid repeating the steps again to get to that view. When you are done making the required selections in the input controls and are ready to save the view as a bookmark, click on the bookmark icon from the menu on the top and select ‘Bookmark Current State’.

[Screenshot: Bookmark Current State option]

 

Save the bookmark with the name of your choice. In this case, it is named as “PAR_VBI_OG_EHS_BM_HIGH”.

You can either create personal story bookmarks which are accessible only to you or share your story bookmarks with others by choosing ‘Global’ as the ‘Type’.

Select the option ‘Set as new default’ if you would like the current saved state to be loaded by default while opening the story next time. Click on ‘Save’.

[Screenshot: saving the bookmark]

 

SAP Analytics Cloud allows you to create multiple bookmarks for each story and switch between bookmarks when you are in ‘View’ mode. This will help reduce the time taken to repeat all the steps required to get to a view and also the need for multiple copies of the same story in order to analyze different scenarios.

Now in ‘View’ mode, click on the Bookmark icon to view the saved bookmarks. You can see them listed under ‘My Bookmarks’ for personal bookmarks, ‘Original Story’ and ‘Global Bookmarks’.

[Screenshot: saved bookmarks listed in View mode]

 

Bookmarks for Explorer Views

Bookmarking for Explorer Views works differently from that for Stories.

In your Story, select a chart and then enable Explore. On launching Explorer mode, click on ‘Add New View’. The new view can be customized with charts and filters, saved with a name and viewed later.

[Screenshot: adding a new view in Explorer mode]

 

Deleting a component

When you delete a component in a bookmark view, the component gets removed from all the bookmarked views as well as the parent story upon saving the view. A warning message like the one seen below will be displayed. Similarly, when a component is deleted from the parent story, it impacts all the bookmarked views based on that story.

[Screenshot: warning when deleting a component used in bookmarked views]

 

Mobile Support

SAC bookmarking functionality is supported on mobile devices. If bookmarks exist in the story, the user’s default bookmark opens up when opening the story. The name of the bookmark is visible in the Input Controls panel. One known limitation is that the default bookmark can be changed only in the browser.

 

The roadmap from SAP for SAP Analytics Cloud promises some exciting features and improvements to the bookmarking feature. We will keep updating this blog as and when new features are released.

Reach out to us here today if you are interested in evaluating if SAP Analytics Cloud is right for you.

Subscribe to our Newsletter

The post Bookmarks in SAP Analytics Cloud appeared first on Visual BI Solutions.

[Video] ValQ: What’s New – May 2019 Release


ValQ

ValQ allows business users across industries to visualize and optimize profitability and growth across various modules in seconds, with a top-notch user experience and the latest features such as the Simulation-Enabled Table View and KPI search within complex models. ValQ is now available for SAP & Microsoft Power BI.

In this blog, let’s take a quick preview of the new features that are part of ValQ’s May 2019 release:

 

1. New Simulation-Enabled Table View (in addition to the Standard Tree View)

The Table View lets users consume ValQ models in a tabular format.

Through our surveys, we realized that despite having the tree view, quite a few power users also preferred to view the data in a tabular format. While we were at it, we added simulation capabilities to the table as well, so that users can seamlessly switch between the tree and tabular views, without having to lose track of the individual simulations and their impact.

Like the Tree View, the Table View also comes with Full, Standard and Minimal formats, giving users even greater control over the level of detail displayed.

valq-what-new-may-2019-release

 

Clicking on a KPI name brings up the same pop-up screen here as well.

valq-what-new-may-2019-release

 

2. Search by KPI name

Until now, there were only a few ways to navigate to a specific node:

  1. By organically expanding the tree – which becomes cumbersome for large trees
  2. Using the navigation panel on the left – however, only the key nodes are covered here
  3. Through the change tracker on the top left of the canvas – but this happens only after you have been to the node at least once.

ValQ now has a new Search option at the bottom left of the tree canvas that enables users to search for any node and jump to that node instantly. Search is also available in the tabular view. Both these options now significantly enhance the ease of navigation & access.

valq-what-new-may-2019-release

 

3. Quick Editor – Spreadsheet Like Experience

This design-time enhancement allows users to view and edit all the KPI configurations from a single place with greater ease. Until now, you had to select and dive into a specific node and configure its properties. The Quick Editor gives you a 10,000 ft view so that you can perform several changes at once.

valq-what-new-may-2019-release

 

4. Download Scenario Data

Until now, users were able to download the results of scenario comparison based on a limited set of factors. With this new release, you will be able to download the results of the entire simulation to a spreadsheet.

 

5. Other Latest Features

In addition to the above, there are several other features available as part of the May 2019 release, such as,

  • Enhanced UI/UX for navigation within the model
  • Switching between scenarios on the fly during comparison
  • Dynamic scaling on dynamic models
  • Option to view descendant node count for all nodes
  • Dual modes for waterfall chart displaying variance & simulation breakdown.
  • Template Nodes to reuse repetitive patterns in large models
  • … and more.

 

Video – ValQ: What’s New – May 2019 Release (2 mins)

 

To learn more, tune into one of our ValQ webinars or visit the product website.

Subscribe to our Newsletter

The post [Video] ValQ: What’s New – May 2019 Release appeared first on Visual BI Solutions.

SAP Analytics Cloud Application Design Series: 14 – Passing Parameters to Drilldown WebI Report


In the previous blog of this series, we saw how to use Date and Time range functions in analytic applications in SAP Analytics Cloud. In this blog, we will see how to pass filter values to a Drilldown WebI Report from an Analytic Application.

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

 

SAP Analytics Cloud Application Design helps in creating dynamic, interactive and customizable analytic applications. However, CXOs, managers and analysts often need a detailed report to drill down deeper into problem areas and gain further insight into the data. In such cases, it is often necessary to pass parameters from the parent dashboard to the detailed drill-down report, so that the report shows data for the same values selected in the dashboard.

 

WebI Prerequisites

To pass filter values from an SAP Analytics Cloud analytic application to a drill-down WebI report, the dimensions for which values need to be passed must be created as prompts in the WebI report. The parameters can then be passed by appending them with ‘&’ to the report’s OpenDocument link.

 

OpenDocument Syntax

An OpenDocument URL is generally structured as seen below:

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report
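
As a hedged illustration (server, port and document CUID are placeholders, and the exact parameter set depends on your BI platform version), a parameterized OpenDocument URL typically looks along these lines:

  http://<server>:<port>/BOE/OpenDocument/opendoc/openDocument.jsp
      ?iDocID=<document CUID>
      &sIDType=CUID
      &sRefresh=Y
      &lsS<PromptName>=<value>               (single-value prompt)
      &lsM<PromptName>=<value1>;<value2>     (multi-value prompt, semicolon-separated)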

 

Once the drill-down WebI report is ready, follow these steps in your analytic application:

 

1. Configure Selector Widgets

You can capture the filter values that need to be passed to the WebI report by using selector widgets in the analytic application, i.e. Dropdown, Radio Button Group and Checkbox Group. In this example, we will use Calendar Year as a single-value prompt (DP_Year) and Rig Status (CG_RigStatus) as a multi-value prompt.

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

 

2. Saving Filter Values using Global Variables

The selector widgets can be populated with values using the onInitialization() script for live connections, or by using the Builder panel for imported connections.
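
As a rough illustration (widget and dimension names are placeholders, and the exact member-retrieval behavior may vary by connection type), populating a dropdown in onInitialization() could look like this:

  // Rough sketch of an onInitialization() script (names are illustrative).
  // getMembers() reads the dimension members from the chart's data source.
  var years = Chart_1.getDataSource().getMembers("Calendar Year");
  for (var i = 0; i < years.length; i++) {
    Dropdown_Year.addItem(years[i].id, years[i].description);
  }
  if (years.length > 0) {
    Dropdown_Year.setSelectedKey(years[0].id);
  }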

a. Defining Global Variables

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

 

b. Capturing the selected values in Global Variables
sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

onSelect() event of Year dropdown

 

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

onSelect() event of Rig Status Checkbox Group
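
A minimal sketch of what these two onSelect() scripts could look like, assuming global variables named g_Year and g_RigStatus and illustrative widget names:

  // Dropdown_Year.onSelect: store the single selected year
  g_Year = Dropdown_Year.getSelectedKey();

  // CheckboxGroup_RigStatus.onSelect: store the selected statuses (string array)
  g_RigStatus = CheckboxGroup_RigStatus.getSelectedKeys();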

 

3. Hyperlink to the WebI Report

You can use an Image or Shape widget and set its hyperlink dynamically through scripting at run time. As seen below, we use a Shape widget to hold the hyperlink used to drill down to the WebI report.

Note: The setHyperlink() API is not available for the Text widget, so it cannot be used in this scenario.

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

 

4. Setting the Hyperlink Dynamically

The hyperlink should change dynamically based on the selected filters. To do that, we need to capture the filter values whenever the chart changes as a result of a new filter selection. So, we capture the filter values in the onResultChanged() event, which is called whenever the result set displayed by the chart changes.

Note: getSelectedKeys() returns a string array, which cannot be appended to a hyperlink directly. You would need a control statement (a loop) to convert the string array into a single string. For more information on control statements in SAP Analytics Cloud, please refer to the blog SAP Analytics Cloud – Application Design Series: 5 – Introduction to Scripting.
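
A hedged sketch of what such an onResultChanged() script could look like is shown below; the global variable names, prompt names and the OpenDocument URL are illustrative, not the exact script shown in the screenshot that follows:

  // Hedged sketch: convert the string array to a semicolon-separated string
  // and rebuild the OpenDocument URL (server and CUID are placeholders).
  var rigStatusParam = "";
  for (var i = 0; i < g_RigStatus.length; i++) {
    rigStatusParam = rigStatusParam + g_RigStatus[i];
    if (i < g_RigStatus.length - 1) {
      rigStatusParam = rigStatusParam + ";";
    }
  }
  var url = "http://<server>:<port>/BOE/OpenDocument/opendoc/openDocument.jsp" +
            "?iDocID=<CUID>&sIDType=CUID&sRefresh=Y" +
            "&lsSDP_Year=" + g_Year +
            "&lsMCG_RigStatus=" + rigStatusParam;
  // Finally, assign the URL to the shape via its setHyperlink() API
  // (check the exact signature in the Analytics Designer reference).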

sap-analytics-cloud-application-design-series-14-passing-parameters-to-drilldown-webi-report

 

Reach out to us here today if you are interested in evaluating if SAP Analytics Cloud is right for you.

Subscribe to our Newsletter

The post SAP Analytics Cloud Application Design Series: 14 – Passing Parameters to Drilldown WebI Report appeared first on Visual BI Solutions.

R Visualizations in SAP Analytics Cloud Series: 3 – Dynamic Swap, Sort and Rank


This blog series is about leveraging R Visualizations in SAP Analytics Cloud. In the previous blog, we learnt how to add interactivity and customizations to R widgets. In this blog, we will see how to swap dimensions / measures, sort and rank visualizations dynamically in SAP Analytics Cloud using R.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

 

Initializing the Data Frame

Add an R Visualization to the canvas and create a data frame as required. For the steps to create a data frame, visit this blog.

For the sake of simplicity, let us consider the dimensions and measures highlighted in the snapshot below to demonstrate swap, sort and rank at runtime. You can also include various other options like adding dimensions/measures, flipping axes, etc., as per your needs.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

R Visualization structure with two data frames initialized

 

Configuring the R widget

The visualization uses the ggplot2 and dplyr libraries with standard evaluation (SE) semantics to achieve dynamic parameter passing from the SAP Analytics Cloud environment. Let us create a lollipop chart in R to visualize the data, although any chart type can be used. Follow the script in the snapshot below to configure the widget.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

R Script for creating a Lollipop R widget with swap, sort and rank functionalities

 

Note how geom_segment() along with geom_point() are used in combination to create a Lollipop chart. You can play around with the script and then click on Apply to add the visualization to the canvas.
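
For reference, a minimal sketch of such a script is given below. It assumes the R variables swap_dim, swap_measure, sort_order and top_n_count are set from the application and that the data frame bound to the widget is named SalesData; the actual script in the screenshot above may differ.

  # Minimal sketch (not the exact script from the screenshot above).
  # swap_dim, swap_measure, sort_order and top_n_count are assumed to be
  # set from the analytic application; SalesData is the widget's data frame.
  library(ggplot2)
  library(dplyr)

  df <- SalesData %>%
    group_by(.data[[swap_dim]]) %>%
    summarise(value = sum(.data[[swap_measure]], na.rm = TRUE))

  df <- if (sort_order == "asc") arrange(df, value) else arrange(df, desc(value))
  df <- head(df, top_n_count)

  # Keep the chosen sort order on the axis
  df$label <- factor(df[[swap_dim]], levels = unique(df[[swap_dim]]))

  ggplot(df, aes(x = label, y = value)) +
    geom_segment(aes(xend = label, y = 0, yend = value), colour = "grey60") +
    geom_point(size = 3, colour = "steelblue") +
    coord_flip() +
    labs(x = swap_dim, y = swap_measure)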

The R variables used in the script are described in the table below. They must be declared / defined in the onClick and onInitialization events of the analytic application, to be used within the R script.

These variables act as intermediates between the SAP Analytics Cloud environment and the R script using setString() and setNumber() functions to set the R variables from the respective dropdowns.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

R variables with corresponding action item

 

Configuring the Dropdowns

Let us now add the necessary dropdowns

  1. Swap Dimension
  2. Swap Measure
  3. Sort
  4. Show Top / Bottom

The snapshot below shows the layout of the dropdowns, input field and button on the canvas.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

 

After adding the dropdowns, the onClick event of the Apply button is configured to get user selections from the respective dropdowns and set them to the R widget. The following snapshot shows the script for the same.

Note that if dimension/measure names contain spaces, they need to be enclosed in backticks (`), as seen in line no. 7 of the script below.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

Button onClick event showing script to get and set members to R widget
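
A hedged sketch of such an onClick() script is given below; the widget names are illustrative, and the assumption that the R widget exposes its script variables via getInputParameters() should be verified against your SAC version.

  // Hedged sketch of the Apply button's onClick script (widget names illustrative).
  var params = RVisualization_1.getInputParameters();
  params.setString("swap_dim", Dropdown_Dimension.getSelectedKey());
  params.setString("swap_measure", Dropdown_Measure.getSelectedKey());
  params.setString("sort_order", Dropdown_Sort.getSelectedKey());
  // The Top/Bottom count would normally be read from the input field and
  // converted to a number before being passed here.
  params.setNumber("top_n_count", 5);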

 

Default configuration

All the dropdowns and input fields are given default values in the onInitialization event of the application. The application is set to show the top five liquor categories based on Volume Sold (Liters), sorted in ascending order, as seen below.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

Application onInitialization event to configure dropdowns and R widget

 

The table below shows the default values that are set to the R variables.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

 

When the application is run, the visualization is rendered with the default options set as seen below.

r-visualizations-in-SAP-analytics-cloud-series-3-dynamic-swap-sort-rank

 

In subsequent blogs of this series, we will continue to learn more about leveraging R Visualizations in SAP Analytics Cloud. Stay Tuned!

Reach out to our team here if you are interested in evaluating if SAP Analytics Cloud is right for you.

Subscribe to our Newsletter

The post R Visualizations in SAP Analytics Cloud Series: 3 – Dynamic Swap, Sort and Rank appeared first on Visual BI Solutions.


R Visualizations in SAP Analytics Cloud Series: 2 – Adding Interactivity and Customizations to R widgets


This blog series is about leveraging R Visualizations in SAP Analytics Cloud. In the previous blog, we learnt how to create R widgets in an analytic application and perform dynamic filtering on them. In this blog, let us see how to add interactive tooltips, data labels and other customizations to enhance the user experience in R charts of SAP Analytics Cloud.

 

The plotly library

While most R packages render visuals as static PNG images, plotly, being a JavaScript graphing library, allows the user to create interactive charts. Below you can see a fully customized R visualization built using the plotly library.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

Now, let us see how to add a simple plotly chart and customize it for SAP Analytics Cloud Stories / Applications to suit our requirements. Though the customizations discussed in this blog can be used for all chart types, for easier understanding, we’ll consider using a bar chart.

 

Adding interactivity to R widgets using the plotly library

The plot_ly() function is used to plot various visualizations. The following script renders a simple bar chart. Refer to the previous blog to learn how to configure R widgets in SAP Analytics Cloud.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets
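
As a hedged approximation of that script (SalesData, Category and NetSales are placeholder names for whatever your R widget's data frame contains):

  # Minimal plotly bar chart; data frame and column names are placeholders
  library(plotly)
  plot_ly(SalesData,
          x = ~NetSales,
          y = ~Category,
          type = "bar",
          orientation = "h")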

 

Executing the script renders an unsorted bar chart with tooltips. By default, the tooltip shows the measure value and the name of the member.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

The mode bar also offers various options including zoom, export, pan, auto-scale, etc. It even allows the user to compare two data points on hover, like native charts.

 

Customizing R Widgets

Although the chart plotted using plot_ly() provides various features to the end user, such as the mode bar, note that it currently looks drastically different from the native charts in a Story/App, and that the y-axis labels are cut off by default because they are too long to fit.

You need not put up with these aesthetic discrepancies to leverage the capabilities of R in SAP Analytics Cloud. There are ways to extensively customize the look of the chart, as explained below.

 

Plotly provides multiple functions and numerous tweaking options that make the visualizations highly customizable. Along with the default plot_ly() function, let's make use of the layout() and config() functions to make the chart as aesthetically appealing as the native ones.

 

1. Leveraging plot_ly() for interactivity

The following options can be customized within the plot_ly() function:

a. Sort

Use the reorder() function to sort the bars based on a measure.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

b. Tooltip

The tooltip can be configured by assigning any measure to the hoverinfo option.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

c. Bar Color

The color of the bars can easily be changed within the marker option.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

2. Enhancing the plot using layout()

a. Margin

Margins can be customized within the layout() function using the margin option.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

b. Auto Size

Setting autosize = T will auto-size the chart.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

c. Axis Line

The linewidth option is used to assign a line width to the axes.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

d. Axis Labels and Gridlines

You can remove axis labels by setting showticklabels to false and hide the gridlines by setting showgrid to false.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

e. Format Tooltip

The tooltip added previously can be formatted here using hoverlabel.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

f. Font

Font properties like color and family can also be changed.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

3. Customizing Data Labels

The add_annotations() function can be used to display any measure as data labels, which can be formatted using options like xanchor to align the labels.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

 

4. Customizing Mode Bar

The displayModeBar option of the config() function is used to show or hide the mode bar.
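
Putting these options together, a consolidated script might look roughly like the sketch below; the data frame, column names, colors and margins are illustrative rather than the exact code behind the screenshots above.

  # Illustrative consolidation of the plot_ly(), layout() and config() options above
  library(plotly)
  p <- plot_ly(SalesData,
               x = ~NetSales,
               y = ~reorder(Category, NetSales),      # sort bars by the measure
               type = "bar",
               orientation = "h",
               marker = list(color = "#1F77B4"),      # bar color
               hoverinfo = "x+y")                     # tooltip content
  p <- layout(p,
              autosize = TRUE,                        # auto-size to the widget
              margin = list(l = 120, r = 20, t = 20, b = 40),
              xaxis = list(showgrid = FALSE, linewidth = 1),
              yaxis = list(title = "", showticklabels = TRUE),
              font = list(family = "Arial", color = "#333333"),
              hoverlabel = list(font = list(size = 12)))
  p <- config(p, displayModeBar = FALSE)              # hide the mode bar
  p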

 

Customizations along these lines make R widgets in SAP Analytics Cloud resemble native charts. Apart from the options mentioned above, you can find plenty more widget customization options here.

These customizations can also be applied to different chart types as shown below.

 

In the next blog of this series, we will look at how to dynamically swap, sort and rank visualizations in SAP Analytics Cloud using R.

Reach out to our team here if you are interested in evaluating if SAP Analytics Cloud is right for you.

Subscribe to our Newsletter

The post R Visualizations in SAP Analytics Cloud Series: 2 – Adding Interactivity and Customizations to R widgets appeared first on Visual BI Solutions.

Exploring the Concept of Dynamic Hierarchy in SAP HANA


A hierarchy is an arrangement of nodes in a specific order to describe a business function. SAP HANA supports two types of hierarchies, namely Level Hierarchies and Parent-Child Hierarchies. This blog explores the concept of Dynamic Hierarchy in detail: its purpose and how it is supported in SAP HANA.

Though most hierarchies are rigid, there are some that can be rearranged/reorganized to address a different business need. Let’s look at an example.

exploring-concept-dynamic-hierarchy-sap-hana

 

The above-mentioned hierarchy depicts a typical Organization Hierarchy across: Organization -> Department -> Projects -> Time Line -> Teams -> Resources.

We can also reorder the same hierarchy structure to get a different insight. Here, we have reordered the structure into a Resource-specific format. This will display the different Projects and Teams where each Resource has worked across various months. Typically, we would create two different hierarchies for the two scenarios explained.

exploring-concept-dynamic-hierarchy-sap-hanaexploring-concept-dynamic-hierarchy-sap-hana

 

A single hierarchy that allows you to decide the order and level of nodes dynamically from a report is known as a Dynamic Hierarchy. The report shown below was developed using Lumira Designer and SAP HANA.

r-visualizations-sap-analytics-cloud-series-2-adding-interactivity-customizations-r-widgets

 

Steps to create Dynamic Hierarchy in SAP HANA

Dynamic Hierarchy is possible only by leveraging Level Hierarchy in SAP HANA. It is not possible with Parent-Child Hierarchy.

Create a level-based hierarchy model with 5 levels of nodes. Let’s see how we can convert this static hierarchy into a dynamic one.

There are five columns in our Static Level-based Hierarchy model: Department, Project, Period, Role, Person. Now, create five calculated columns and five input parameters – each one corresponding to a level (column) of the hierarchy (Note: If you have ‘N’ number of levels, then you must create ‘N’ Calculated columns and ‘N’ Input Parameters).

exploring-concept-dynamic-hierarchy-sap-hana

 

From the report, the user can pass a different value to each input parameter (each value corresponds to a column). Depending on the input parameter values, the calculated columns fetch the respective levels (columns) from the table/view.
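
To make this concrete, the logic of one such calculated column, expressed here as plain SQL rather than the modeler's graphical expression editor (table, column and parameter names are illustrative), would be roughly:

  -- Sketch: equivalent logic of the level-1 calculated column (CC_LEVEL1).
  -- IP_LEVEL1 holds the name of the column the user picked for the top level.
  SELECT
    CASE :IP_LEVEL1
      WHEN 'Department' THEN "DEPARTMENT"
      WHEN 'Project'    THEN "PROJECT"
      WHEN 'Period'     THEN "PERIOD"
      WHEN 'Role'       THEN "ROLE"
      ELSE                   "PERSON"
    END AS "CC_LEVEL1"
    -- CC_LEVEL2 .. CC_LEVEL5 follow the same pattern using IP_LEVEL2 .. IP_LEVEL5
  FROM "ORG_PROJECT_DATA"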

exploring-concept-dynamic-hierarchy-sap-hanaexploring-concept-dynamic-hierarchy-sap-hana

 

Add these calculated columns to the Level Hierarchy.

exploring-concept-dynamic-hierarchy-sap-hana

 

And finally, your Dynamic Hierarchy in SAP HANA is ready!

 

To know more about SAP HANA offerings from Visual BI Solutions, click here

Subscribe to our Newsletter

The post Exploring the Concept of Dynamic Hierarchy in SAP HANA appeared first on Visual BI Solutions.

Cascading Filters in SAP HANA using Value Help


Cascading filters are a combination of filters where the selection of one filter determines the list of values available for selection in the subsequent filters. The primary objective is to ensure a better user experience by providing users with valid combinations of input parameters.

In this blog, let's look at how cascading filters can be achieved in SAP HANA through Value Help (commonly called F4 help), which is a property of input parameters. Given below are some scenarios of cascading filters being consumed in Analysis for Office and Lumira Designer.

 

SAP HANA Cascading Filter consumed in Analysis for Office

cascading-filters-in-sap-hana-using-value-help

 

SAP HANA Cascading Filter consumed in Analysis for Office

cascading-filters-in-sap-hana-using-value-help

 

SAP HANA Cascading Filter consumed in Lumira Designer

cascading-filters-in-sap-hana-using-value-help

 

HANA Table used in this example

cascading-filters-in-sap-hana-using-value-help

 

We require two HANA views for this purpose. Let's start with the first view, which we'll name ‘CV_Value_Help’. We will build it as a dimension-type calculation view because it will be used only for value help.

We then add 4 input parameters (IP_CATEGORY, IP_COUNTRY, IP_STATE, IP_CUSTOMERNAME) and assign them as filters to their corresponding columns.

cascading-filters-in-sap-hana-using-value-help

cascading-filters-in-sap-hana-using-value-help

 

This will return 2 rows as seen below:

cascading-filters-in-sap-hana-using-value-help

 

Now let's create the second calculation view and name it ‘Cascading_Multiple_IP’. We follow a similar procedure and create 4 input parameters, but this time the value help for each is mapped to a column of the previous view, ‘CV_Value_Help’.

cascading-filters-in-sap-hana-using-value-help

 

Once the input parameters are created with their respective columns, we add them as filters to the corresponding columns. Each input parameter of the current view is then mapped to the previous view. For instance, for the IP_CASCADE_CATEGORY input parameter, we map the other 3 input parameters to the ‘CV_Value_Help‘ view. The other input parameters are mapped in a similar manner to implement cascading across all filters.

cascading-filters-in-sap-hana-using-value-help

 

The ‘Cascading_Multiple_IP’ view can now be consumed in a front-end application to achieve the cascading filter functionality.

cascading-filters-in-sap-hana-using-value-help
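
For reference, this is roughly how a front end (or a SQL console) would pass the chosen values to such a view using HANA's PLACEHOLDER syntax; the package path and parameter values are illustrative, and how an empty default is interpreted depends on the filter expressions defined in the view.

  -- Sketch: fetch the list of States after Category and Country have been chosen.
  SELECT DISTINCT "STATE"
  FROM "_SYS_BIC"."demo/CV_Value_Help"
       ('PLACEHOLDER' = ('$$IP_CATEGORY$$', 'Furniture'),
        'PLACEHOLDER' = ('$$IP_COUNTRY$$', 'United States'),
        'PLACEHOLDER' = ('$$IP_STATE$$', ''),
        'PLACEHOLDER' = ('$$IP_CUSTOMERNAME$$', ''));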

 

You can learn more about Visual BI’s SAP HANA Offerings here.

Subscribe to our Newsletter

The post Cascading Filters in SAP HANA using Value Help appeared first on Visual BI Solutions.

Building Custom Visualizations in Tableau – Bump Chart


Tableau has a rich collection of charts that can be used for different types of analysis. Apart from the built-in charts, developers can create custom ones using Tableau's features.

 

Bump Charts

Bump charts are very useful for comparative analysis and for understanding trends over a period of time. In this blog, let's learn the steps to build a bump chart using an example scenario: comparing sales trends across regions for the months of 2017.

1. Build the following chart:

building-custom-visualizations-in-tableau-bump-chart

 

  2. Convert the ‘SUM(Sales)’ green pill in Rows to a Rank table calculation (an equivalent calculated-field version is sketched after these steps):

building-custom-visualizations-in-tableau-bump-chart

 

  3. The problem here, as seen in the image above, is that the ranking is computed across the months for each region, whereas we need it computed across the regions for each month. So, we need to change the level at which the table calculation takes place. Change the ‘Compute Using’ option for ‘SUM(Sales)’ from ‘Table (Across)’ to ‘Region’. After changing the computation level, we get the following output:

building-custom-visualizations-in-tableau-bump-chart

 

4. Duplicate the ‘SUM(Sales)’ pill and place it beside the original pill in Rows and create a dual axis:

building-custom-visualizations-in-tableau-bump-chart

 

5. Convert ‘Marks’ of one of the Measure Axis to ‘Circle’ and add the ‘Rank of SUM(Sales)’ as the ‘Text’ for the Labels. Change the ‘Alignment’ of the labels’ text as shown below and hide the axis on the right side of the chart:

building-custom-visualizations-in-tableau-bump-chart

 

6. Finally, edit the axis and reverse the scale. Right click on the axis and select the option to bring the marks to the front. We get the final output as shown below:

building-custom-visualizations-in-tableau-bump-chart
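
For reference, the rank used in steps 2 to 5 can also be written as an explicit calculated field; the sketch below assumes a field named 'Sales Rank' and achieves the same result as the pill-level table calculation.

  // Calculated field "Sales Rank" (Tableau calculation language)
  RANK(SUM([Sales]))
  // Set Compute Using -> Region so the ranking restarts for each month,
  // then place the field on Rows and reverse its axis.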

 

We can now easily compare the performance of each region in a month, based on the Rank value shown in the respective labels, and track the growth of the regions over a time period.

We can create many more custom visualizations in Tableau using its built-in features, which we will focus on in subsequent blogs.

 

You can catch up on our other Tableau blogs here.

Subscribe to our Newsletter

The post Building Custom Visualizations in Tableau – Bump Chart appeared first on Visual BI Solutions.

Drag and Drop Components at Runtime in SAP Lumira Designer – Part 2


In the previous blog, we discussed how to Drag and Drop Components at Runtime in SAP Lumira Designer. In SAP Design Studio 1.6 and versions prior to it, users could leverage the ‘Online Composition’ feature to create dynamic dashboard layouts on the fly. But the ‘Online Composition’ feature has been deprecated since Lumira Designer 2.0. This blog demonstrates how users can generate container components with drag and drop capabilities at run time, providing them with a near-Self-Service BI experience.

drag-and-drop-components-at-runtime-in-sap-lumira-designer-part-2

 

1. Usage of ‘COMPONENTS’

SAP Lumira Designer provides a ‘Technical Component’ called ‘COMPONENT’ that can be used to create components on the fly. We can use this to generate our container components that can hold the dragged visualizations.

Create a dashboard with the required charts / visualizations and add a ‘COMPONENT’ from ‘Technical Components’.

drag-and-drop-components-at-runtime-in-sap-lumira-designer-part-2

 

Now the dashboard looks like the screenshot below with the following components:

  • VBX ColumnBar Chart
  • Native Crosstab
  • Native Chart
  • VBX Data Utility component

drag-and-drop-components-at-runtime-in-sap-lumira-designer-part-2

 

2. Creating Dynamic Panels

Add a button that dynamically creates a new panel on each user click. Write the script below to create a panel of the desired height and width.

drag-and-drop-components-at-runtime-in-sap-lumira-designer-part-2

 

On the click of this button, a new panel is created into which the draggable components can be dropped. Each row can have 2 panels (in our scenario) and they wrap down to a new row for better viewability.

drag-and-drop-components-at-runtime-in-sap-lumira-designer-part-2

 

3. Assignment of dynamic attribute

To bind the functions to the events, the IDs of the container panels are a prerequisite. In Lumira, the panels generated using the COMPONENT Technical Component are automatically given IDs of the form PANEL_n_panel1, where n increments for each panel generated. Therefore, the number of panels generated must be tracked each time a panel is added, so that it can later be used to assign attributes to the panels. We can use the DSXSetCode() function of the script box, as shown below, to pass this value from Lumira into the script box code.

drag-and-drop-components-at-runtime-in-sap-lumira-designer-part-2

 

Now we dynamically bind the functions mentioned in the previous blog to the newly created panels. "PANEL_" + div_input + "_panel1" gives the ID generated at run time for the component, to which the ‘ondrop’ and ‘ondragover’ attributes are added and the respective functions are bound.
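
A hedged sketch of the corresponding script-box code is shown below; allowDrop() and dropComponent() are placeholders for the drag-and-drop handler functions from the previous blog (their actual names may differ), and div_input is the panel counter injected via DSXSetCode().

  // Bind the drag-and-drop handlers to the newly generated panel's DOM element.
  var panelId = "PANEL_" + div_input + "_panel1";
  var panel = document.getElementById(panelId);
  if (panel) {
    // allowDrop / dropComponent stand in for the handlers defined in Part 1
    panel.setAttribute("ondragover", "allowDrop(event)");
    panel.setAttribute("ondrop", "dropComponent(event)");
  }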

 

Self-Service BI experience

Users can now have an enhanced Self-Service BI experience as they can drag and drop the components into the dashboard.

drag-and-drop-components-at-runtime-in-sap-lumira-designer-part-2

 

Please reach out to us here for Visual BI’s SAP Lumira Service Offerings and learn more about Visual BI’s Self-Service BI offerings here.

Subscribe to our Newsletter

The post Drag and Drop Components at Runtime in SAP Lumira Designer – Part 2 appeared first on Visual BI Solutions.
