Nulls are evil (or at the very least mischievous)

If you have been involved in hands-on database development for a while, or in any kind of BI / reporting initiative, you probably already know this. Nulls are evil (or at the very least mischievous).
A null value in a database column is different from a blank, a white space or any of a multitude of other invisible characters. It is truly void, nothingness, the abyss…
The implications of leaving null values in place can be very confusing for downstream reporting development. Null values cause problems when aggregating numeric values, make joins fall apart and complicate the querying of any table where they appear. As a case in point, I set out to demonstrate below how four major databases handle (or fail to handle..) null values. The answer is the same in each case.
The use case I created is simple:
I created a table with 8 records in it. 2 of the records are null values. Then I queried the database with three simple questions:
• How many values are there where the value is not ‘a’ – since one record has a value of ‘a’, I expected the answer to be 7
• Given a wild card search on the value, provide all records – I expected to get 8 records back from a “wild card” search on all records
• How many records are there in the table, counting the value field – I expected 8
All databases gave the same answer:
How many values are there where the value is not ‘a’ – 5. The 2 null values were not considered
Given a wild card search on the value, provide all records – 6. The null records were ignored.
How many records are there in the table, counting the value field – 6. Yes, the database simply does not count the null values, as if they did not exist. Real nothingness…
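The same three-valued-logic behavior is easy to reproduce anywhere; below is a quick sketch using Python's built-in sqlite3 module (SQLite follows the same rules), along with the standard workarounds – an explicit IS NULL test, COALESCE, and COUNT(*):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table test_nulls (column_a varchar(255))")
conn.executemany(
    "insert into test_nulls (column_a) values (?)",
    [("a",), ("b",), ("c",), (None,), (None,), ("f",), ("g",), ("h",)],
)

# NULL <> 'a' evaluates to UNKNOWN, so the two NULL rows are filtered out: 5 rows
not_a = conn.execute(
    "select count(*) from test_nulls where column_a <> 'a'").fetchone()[0]

# LIKE '%' also skips the NULL rows: 6 rows
wildcard = conn.execute(
    "select count(*) from test_nulls where column_a like '%'").fetchone()[0]

# count(column_a) ignores NULLs (6), while count(*) counts every row (8)
cnt_col = conn.execute("select count(column_a) from test_nulls").fetchone()[0]
cnt_all = conn.execute("select count(*) from test_nulls").fetchone()[0]

# The fixes: test for NULL explicitly, or substitute a sentinel with COALESCE
not_a_fixed = conn.execute(
    "select count(*) from test_nulls "
    "where column_a <> 'a' or column_a is null").fetchone()[0]
all_fixed = conn.execute(
    "select count(*) from test_nulls "
    "where coalesce(column_a, '') like '%'").fetchone()[0]

print(not_a, wildcard, cnt_col, cnt_all, not_a_fixed, all_fixed)
# prints: 5 6 6 8 7 8
```

The takeaway: any predicate or aggregate over a nullable column silently drops the NULL rows unless you account for them explicitly.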
Below are the test results in Oracle, SAP HANA, MySQL and SQL Server. Beware of nulls…
create table test_nulls (column_a varchar(255));
insert into test_nulls (column_a) values ('a');
insert into test_nulls (column_a) values ('b');
insert into test_nulls (column_a) values ('c');
insert into test_nulls (column_a) values (null);
insert into test_nulls (column_a) values (null);
insert into test_nulls (column_a) values ('f');
insert into test_nulls (column_a) values ('g');
insert into test_nulls (column_a) values ('h');

Oracle:
select * from test_nulls;
[screenshot: ora01]
select * from test_nulls where column_a <> 'a';
[screenshot: ora02]
select * from test_nulls where column_a like '%';
[screenshot: ora03]
select count(column_a) from test_nulls;
[screenshot: ora04]
HANA:
create column table test_nulls (column_a varchar(255));
insert into test_nulls (column_a) values ('a');
insert into test_nulls (column_a) values ('b');
insert into test_nulls (column_a) values ('c');
insert into test_nulls (column_a) values (null);
insert into test_nulls (column_a) values (null);
insert into test_nulls (column_a) values ('f');
insert into test_nulls (column_a) values ('g');
insert into test_nulls (column_a) values ('h');

select * from test_nulls;
[screenshot: hana01]
select * from test_nulls where column_a <> 'a';
[screenshot: hana02]
select * from test_nulls where column_a like '%';
[screenshot: hana03]
select count(column_a) from test_nulls;
[screenshot: hana04]
MySQL:
select * from test_nulls;
[screenshot: mysql01]
select * from test_nulls where column_a <> 'a';
[screenshot: mysql02]
select * from test_nulls where column_a like '%';
[screenshot: mysql03]
select count(column_a) from test_nulls;
[screenshot: mysql04]
SQL Server:
select * from test_nulls;
[screenshot: sqlserver01]
select * from test_nulls where column_a <> 'a';
[screenshot: sqlserver02]
select * from test_nulls where column_a like '%';
[screenshot: sqlserver03]
select count(column_a) from test_nulls;
[screenshot: sqlserver04]

Posted in BI At Large, Data Warehousing, SAP HANA | Tagged , | Leave a comment

Oscars dashboard

[image: oscars dashboard]

I LOVE movies. I am fascinated by the craft of creating movies. It's a field that combines technology and imagination in unique ways, where science and art interact to create an amazing product (well, not always, but many times..). Like millions of others, I enjoy the yearly Oscars awards, which allow those of us who enjoy watching the movies to view the people responsible for them in a different light. So, this year, I set out to explore the Oscars from a BI perspective. I began by going to the Oscars.org web site and obtaining an extract of the nominations and winners data going back to the first award show in 1927. The Oscars.org web site makes this data public, however the data is presented in a format that is very unfriendly for analysis beyond observation on a web page. So, the first task I faced was transforming the data into a format I could load into a BI tool and use to mine the data.

Next, I leveraged some of the new features available in SAP Web Intelligence (Webi) 4.1 to create a visual, interactive dashboard that is completely HTML based (it will work on any device) and visually appealing as well as interactive (with the Search capabilities). Use the Search functionality to look for your favorite actors or any other film related terms to see how many times they were nominated and/or won. Feel free to click the image above to link to the live Oscars dashboard. If you have other interesting ideas for Oscars related data visualization, please let me know, I would be delighted to share my Oscars data file!

Posted in BI At Large, Data visualization, HTML5, SAP Mobile BI, Web Intelligence | Tagged , | 2 Comments

Signal and Noise inspired dashboard

Reading Nate Silver's "The Signal and the Noise: Why So Many Predictions Fail — but Some Don't" (amazon link to book) inspired me to create this dashboard to articulate the idea of searching for signal and meaningful insights in the noise of the data. This is the html5 output from an xlf; feel free to download the .xlf used to produce it.

Posted in BI At Large, Data visualization, HTML5, SAP Mobile BI, Xcelsius | Tagged , , | Leave a comment

How to add animation in SAP Dashboard (Xcelsius) on Mobile

One of SAP Dashboards' (Xcelsius) most attractive features is animation. The way components animate in the desktop version of a compiled dashboard, and the ability to animate various visualizations and include animated files, is key to the "sleekness" associated with so many dashboard applications. However, animation is not currently supported when compiling .xlf files for the mobile app. This can be worked around, to a degree, by applying dynamic visibility to a series of images to create the effect of animation. This technique can help bring mobile dashboards alive, and can be used to create anything from "please wait while data is loading…" spinners to Cylon heads with a moving red eye. In the example below, I used the html5 output files of the .XLF file, which you may download to see how this animation effect is achieved and can be used in your SAP mobile app.

Posted in BusinessObjects 4.0, Data visualization, HTML5, SAP Mobile BI, Xcelsius | 2 Comments

Book review: Software Development on the SAP HANA Platform

Recently, I have been asked by Packt Publishing to review the e-book “Software Development on the SAP HANA Platform” by Mark Walker.

I found this to be a great introductory book for anyone interested in learning the basics of HANA and gaining a good, hands-on understanding of the various areas of HANA development. The book includes some good exercises and detailed, step-by-step instructions that can help any developer who is starting out with HANA take the first few steps in a variety of topics, such as modeling, security, hierarchies, data sourcing and development on the XS engine. The examples are clear and simple, and the language and descriptions are easy to follow, simple to understand and well-articulated.

Overall, I would recommend this book for getting started and becoming familiar with HANA's basic concepts. However, it is probably not a fit for experienced developers who have been using the technology for a while and are looking for more advanced, in-depth materials. I would actually recommend the various product manuals for gaining much of that insight.

Posted in SAP HANA | Tagged | Leave a comment

Predicting the news with SAP HANA

If you knew who dominated the news yesterday, would you be able to predict who will dominate the news tomorrow…? This is what I set out to test, leveraging some of the new data technologies available to us today…

The big data revolution continues to quietly but surely change the course of our evolution. It's changing how we behave, how we interact, and how we think. And it's not because the concepts of big data are new or unprecedented; it's because it is now possible to do things that technologists, entrepreneurs, scientists and analysts could only dream of a few short decades ago. It's not that the ideas of harnessing information for decision making or predictions are anything new; it's that today, it is possible, accessible and financially feasible to perform the levels of calculation and computation needed to accomplish this.
SAP HANA is a great case in point, and I have been working with some of my colleagues at Cleartelligence on an interesting use case to demonstrate the accessibility and feasibility of big data concepts: predicting the news.
This is an especially exciting topic and time for me, as I can put together several of my passions and skills in new ways, combining my IT degree and background with my psychology degree and social studies interests (high school major..) in ways I could never have imagined before…
I started with a hypothesis, or a premise. And the premise is this: can the news, as reported in popular media channels, be predicted? My hypothesis is that yes, to a degree. If we could amass enough data, we could look for patterns or news categories, and see if we can identify seasonality in news categories. For example, can we correlate seasonality with certain types of news? Are there any sentiment patterns we can use to predict tomorrow's news?
Just several years ago, being able to test out such a concept was completely unfeasible for anyone but a few individuals who had access to the world’s most expensive computing equipment. Today, we can tackle such a project with commodity hardware and enterprise software available to any organization.
So, the first step in the project is to collect data. Of course, the more data we gather, the better and more accurate our analysis can become. And this is one of the key reasons why big data is becoming so pervasive today: the technology now exists to collect sufficient amounts of data and process them in an efficient and economic manner, producing high quality results.
To collect my data, I wrote a small java program that crawls several of the top news web sites and scrapes their front pages into our HANA database. This program runs nightly, and as such, each day our news database grows by as many records as there are web sites scraped.
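As a rough illustration of that collection step, here is a minimal Python sketch (the site list, table layout and the sqlite3 stand-in are illustrative assumptions – the actual program is a java crawler writing to HANA):

```python
import sqlite3
import urllib.request
from datetime import date

# Hypothetical list of front pages to scrape nightly (not the actual sites used)
SITES = {
    "example-news": "http://example.com/",
}

def store_front_page(conn, source, html, day):
    """Store one site's raw front-page HTML; text analysis happens later, in the database."""
    conn.execute(
        "insert into news_pages (source, scraped_on, raw_html) values (?, ?, ?)",
        (source, day.isoformat(), html),
    )

def crawl(conn, sites=SITES, fetch=None):
    """Fetch each front page and store it; `fetch` is injectable for testing."""
    fetch = fetch or (lambda url: urllib.request.urlopen(url, timeout=30)
                      .read().decode("utf-8", "replace"))
    for source, url in sites.items():
        store_front_page(conn, source, fetch(url), date.today())
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for the HANA target table
conn.execute("create table news_pages (source text, scraped_on text, raw_html text)")
# Inject a fake fetcher so the sketch runs without network access
crawl(conn, fetch=lambda url: "<html><body>front page</body></html>")
rows = conn.execute("select source, raw_html from news_pages").fetchall()
```

Run nightly (cron, for example), the table accumulates one raw page per site per day, ready for the text analysis step below.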

Next, I used HANA text analysis functions to index the web site data, which is stored as BLOBs. HANA can automagically process free-form text using the text analysis functions, and has several configuration options for extracting meaning from the unstructured BLOBs. Some of the options include LINGANALYSIS_BASIC, LINGANALYSIS_FULL or EXTRACTION_CORE. Each processing option provides different capabilities, from parsing individual words to using complex linguistic analysis and pattern matching to retrieve specific information about customers' needs and perceptions.

The EXTRACTION_CORE option proved extremely insightful, as it not only extracted meaning out of the BLOBs, it also categorized them into some pre-defined categories – easy to use and simple.

This data gathering program has only been running for a short period, and I plan to continue updating this topic with additional insights, data visualization techniques and examples of interesting usage of this technology as more data is gathered.

Posted in SAP HANA | Tagged , , | Leave a comment

H+ Chronological Timeline Chart

The Google Visualization API's latest release this August added a new timeline chart to the impressive and robust html5 charts already available in this free API. This chart type allows the creation of timeline charts with ease. It can be used to produce Gantt charts, calendars, and all sorts of interesting timeline visualizations. I took it for a spin using my H+ Digital web series timeline (see original post here) and was happy with how simple and quick it was to implement. This example organizes the H+ episodes chronologically, as the actual episode order is non-chronological, and provides an interesting view of when most of the action actually occurs…

Posted in Data visualization, HTML5 | Tagged , , , | Leave a comment

Setting up a single report hyperlink in an SAP dashboard that works on desktop and on mobile

As of SAP BI 4.0 SP7, there are still two different APIs for opening reports over URL through linking on mobile and on desktop. The desktop API is the tried and true openDoc API that has been around for many years. Mobile introduced the sapbi links that work within the mobile app.
So, when designing a dashboard that has a link to a report, there is no simple way to configure the URL button to support both types of links. The dashboard itself doesn't "know" whether it is being opened in the mobile app or on a desktop computer, so it does not automatically convert openDoc URLs to sapbi ones (webi seems to do that well). To work around this problem, you can use a hyperlink to the dashboard and pass a simple flash variable to indicate it has been opened in desktop mode and the openDoc URL should be used rather than the sapbi one, and vice versa. There are other advantages to using a hyperlink to the dashboard, such as passing the host information to avoid the need to change that in the xlf on migrations, as well as specifying the dimensions of the swf object, something that cannot be done when opening the swf object directly from the BI Launchpad. Here are the steps to configure a single URL button to work for both desktop and mobile:

1. First, you will need to craft both links. You can use the share functionality in the mobile app to generate the mobile link, and the BI Launchpad to generate the openDoc link. Your links will look something like this:
For mobile: sapbi://OpenDoc/?authType=secEnterprise&default=yes&connection_name=myconn&server_url=https://myserver.com&ConnectionType=BOEConnection&cms=myCMS&type=webi&iDocID=AVX4JiBBBoFPk123456&reportPageNumber=0&sReportName=Sales%20Summary
For desktop:
https://myserver.com/BOE/OpenDocument/opendoc/openDocument.jsp?iDocID=AVX4JiBBBoFPk123456&sIDType=CUID&sReportName=Sales%20Summary

2. In the dashboard, add your flash variables. You may use a host info variable to pass info like the host name, port and protocol, to avoid having to change those in the XLF as you migrate it between environments. For the purpose of determining whether the dashboard has been opened on the desktop, I added a flash variable called OpenedInDesktop

3. In the model, create a formula that evaluates the value of the OpenedInDesktop flash var. If it is set as expected from the desktop, set the formula value to be the desktop link; otherwise, make it the mobile link

4. Add the URL button to the dashboard and bind its URL property to the formula created to evaluate which link to use based on the flash var

5. Export the dashboard to HTML format on your computer and edit the HTML file produced by Dashboards as follows: add your server host info as the value of the HostInfo flash var and add YES as the value of the OpenedInDesktop flash var. Note there are two places to set each value, as highlighted in the image below. Place this file on your BO server's web server and note the location – you will create a hyperlink to it. For example, you can place it in the Tomcat root directory

6. Finally, configure a hyperlink object in the BI Launchpad pointing to the dashboard, and place that object in the folder where users will be looking for the dashboard link

And that is it! When users open the dashboard from the BI Launchpad on their desktop, the link will work as openDoc, and when they launch the dashboard from the mobile app, the link will work as sapbi
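The decision in step 3 boils down to a single conditional on the flash variable. Here it is sketched in Python for clarity (inside the xlf it would be an Excel IF formula; the link strings are the hypothetical examples from step 1):

```python
# Hypothetical links from step 1 (illustrative, not real endpoints)
DESKTOP_LINK = ("https://myserver.com/BOE/OpenDocument/opendoc/openDocument.jsp"
                "?iDocID=AVX4JiBBBoFPk123456&sIDType=CUID&sReportName=Sales%20Summary")
MOBILE_LINK = ("sapbi://OpenDoc/?authType=secEnterprise&default=yes"
               "&cms=myCMS&type=webi&iDocID=AVX4JiBBBoFPk123456")

def report_link(opened_in_desktop):
    """Mirror of the step-3 formula: the HTML wrapper passes OpenedInDesktop=YES,
    while the mobile app never sets it, so the sapbi link wins there."""
    return DESKTOP_LINK if opened_in_desktop == "YES" else MOBILE_LINK

desktop = report_link("YES")  # opened via the HTML wrapper on a desktop
mobile = report_link("")      # flash var absent/empty inside the mobile app
```

The key design point is that only the desktop path can set the flash variable, so the mobile app falls through to the sapbi link by default.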

Posted in BusinessObjects 4.0, SAP Mobile BI, Xcelsius | Tagged | 3 Comments

Tableau On BO – Setting up live connection from SAP Webi report to Tableau

I recently received several questions about connecting Tableau to SAP BI (Business Objects) as a data source. Tableau already has direct connectivity to HANA; however, many users out there are looking for ways to connect their existing webi (web intelligence) reports to Tableau. So, I set out to look for solutions to this problem and was able to develop two good options that are automated and allow users to leverage, through Tableau, the investment they have already made in setting up webi reports.
I started out by examining the data connections available in Tableau and looking for possible candidates I can use to connect to SAP BI and webi. Two options caught my attention immediately as viable connectivity options, since they are relatively open: the “Other Databases (ODBC)” connection and the “OData” connection.
My first course of investigation was around the ODBC option. I know that Crystal Reports for example has an ODBC driver that can connect to an XML feed. So, in theory, I should be able to create an XML feed off a BI service published from a webi report, and connect using this driver.
Trying to use the Crystal Reports driver was a bust. Technically, this would have worked, but the driver, produced by DataDirect, is licensed for use with Crystal only, and trying to use it with Tableau (or any other client) produces an error message stating the driver can only be used with Crystal. If you happen to have access to a DataDirect or other XML ODBC driver, you can convert the BI Service SOAP response into an XML feed (see the jsp code below), and use that to build a DSN you can connect Tableau to. In my searches, I was only able to find commercially licensed XML ODBC drivers, so my second approach, writing my own OData producer, became more relevant.
OData is a relatively new, but very popular, internet data exchange protocol that defines ways to send and request information from a service. There are several implementations of it, and I ended up choosing the java odata4j framework for my experiment.
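For a sense of what travels over the wire: an OData 2.0 feed is Atom XML with the entity properties nested in an m:properties element. The Python sketch below pulls the properties out of one entry (the sample entry is handmade for illustration, not actual output from this producer):

```python
import xml.etree.ElementTree as ET

# Handmade sample of a single Atom entry, shaped like an OData 2.0 producer's output
ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom"
  xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
  xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <content type="application/xml">
    <m:properties>
      <d:state>Texas</d:state>
      <d:revenue m:type="Edm.Double">12345.0</d:revenue>
    </m:properties>
  </content>
</entry>"""

# Namespace prefixes as ElementTree expects them ({uri}localname)
D = "{http://schemas.microsoft.com/ado/2007/08/dataservices}"
M = "{http://schemas.microsoft.com/ado/2007/08/dataservices/metadata}"

def entry_properties(entry_xml):
    """Return one entry's entity properties as a plain dict of name -> text."""
    root = ET.fromstring(entry_xml)
    props = root.find(".//" + M + "properties")
    return {el.tag[len(D):]: el.text for el in props}

props = entry_properties(ENTRY)
# props == {"state": "Texas", "revenue": "12345.0"}
```

Any OData-aware client, Tableau included, does essentially this parsing for you; the sketch just makes the feed's structure concrete.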
I started out by setting up the odata4j project in Eclipse and getting two examples I was interested in working: the XmlDataProducerExample which describes how to read an XML feed and expose it as an OData producer, and the ExampleProducerFactory example which demonstrates how to expose the OData producer in Tomcat. I ended up using the Jersey based XML example in my working prototype, but I would most certainly look to host this in Tomcat directly in a real world situation.
So, using the odata4j examples, I created a java application and added a class to read my own XML feed instead of the provided one. I also removed the portions of the OData framework that would allow users to make changes to the data through the feed (not needed in our scenario, which is reporting only). So, my main class connecting to the XML feed looks like so:
package org.odata4j.examples.producer;
import static org.odata4j.examples.JaxRsImplementation.JERSEY;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.events.XMLEvent;
import org.odata4j.core.OEntities;
import org.odata4j.core.OEntity;
import org.odata4j.core.OEntityId;
import org.odata4j.core.OEntityKey;
import org.odata4j.core.OExtension;
import org.odata4j.core.OFunctionParameter;
import org.odata4j.core.OProperties;
import org.odata4j.core.OProperty;
import org.odata4j.core.Throwables;
import org.odata4j.edm.EdmDataServices;
import org.odata4j.edm.EdmEntityContainer;
import org.odata4j.edm.EdmEntitySet;
import org.odata4j.edm.EdmEntityType;
import org.odata4j.edm.EdmFunctionImport;
import org.odata4j.edm.EdmProperty;
import org.odata4j.edm.EdmSchema;
import org.odata4j.edm.EdmSimpleType;
import org.odata4j.examples.AbstractExample;
import org.odata4j.examples.ODataServerFactory;
import org.odata4j.examples.producer.jpa.northwind.Customers;
import org.odata4j.exceptions.NotImplementedException;
import org.odata4j.producer.BaseResponse;
import org.odata4j.producer.CountResponse;
import org.odata4j.producer.EntitiesResponse;
import org.odata4j.producer.EntityIdResponse;
import org.odata4j.producer.EntityQueryInfo;
import org.odata4j.producer.EntityResponse;
import org.odata4j.producer.ODataProducer;
import org.odata4j.producer.QueryInfo;
import org.odata4j.producer.Responses;
import org.odata4j.producer.edm.MetadataProducer;
import org.odata4j.producer.resources.DefaultODataProducerProvider;
/**
* This example shows how to expose xml data as an atom feed.
*/
public class XmlDataProducerExampleRon2 extends AbstractExample {
public static final String endpointUri = "http://localhost:8010/XmlDataProducerExampleRon2.svc";
public static void main(String[] args) {
XmlDataProducerExampleRon2 example = new XmlDataProducerExampleRon2();
example.run(args);
}
private void run(String[] args) {
System.out.println("Please direct your browser to " + endpointUri + "/Customers");
// register the producer as the static instance, then launch the http server
DefaultODataProducerProvider.setInstance(new XmlDataProducer());
new ODataServerFactory(JERSEY).hostODataServer(endpointUri);
}
@XmlRootElement
public class CustomersList {
@XmlElement
Customers[] customers;
}
/**
* Sample ODataProducer for providing xml data as an atom feed.
*/
public class XmlDataProducer implements ODataProducer {
private final EdmDataServices metadata;
private XMLInputFactory xmlInputFactory;
public XmlDataProducer() {
// build the metadata here hardcoded as example
// one would probably generate it from xsd schema or something else
String namespace = "XmlExample";
List<EdmProperty.Builder> properties = new ArrayList<EdmProperty.Builder>();
properties.add(EdmProperty.newBuilder("state").setType(EdmSimpleType.STRING));
properties.add(EdmProperty.newBuilder("lines").setType(EdmSimpleType.STRING));
properties.add(EdmProperty.newBuilder("manager").setType(EdmSimpleType.STRING));
properties.add(EdmProperty.newBuilder("owned").setType(EdmSimpleType.STRING));
properties.add(EdmProperty.newBuilder("revenue").setType(EdmSimpleType.DOUBLE));
properties.add(EdmProperty.newBuilder("margin").setType(EdmSimpleType.DOUBLE));
properties.add(EdmProperty.newBuilder("recordID").setType(EdmSimpleType.DOUBLE));
List<EdmEntityType.Builder> entityTypes = new ArrayList<EdmEntityType.Builder>();
EdmEntityType.Builder type = EdmEntityType.newBuilder().setNamespace(namespace).setName("Customers").addKeys("recordID").addProperties(properties);
entityTypes.add(type);
List<EdmEntitySet.Builder> entitySets = new ArrayList<EdmEntitySet.Builder>();
entitySets.add(EdmEntitySet.newBuilder().setName("Customers").setEntityType(type));
EdmEntityContainer.Builder container = EdmEntityContainer.newBuilder().setName(namespace + "Entities").setIsDefault(true).addEntitySets(entitySets);
EdmSchema.Builder modelSchema = EdmSchema.newBuilder().setNamespace(namespace + "Model").addEntityTypes(entityTypes);
EdmSchema.Builder containerSchema = EdmSchema.newBuilder().setNamespace(namespace + "Container").addEntityContainers(container);
metadata = EdmDataServices.newBuilder().addSchemas(containerSchema, modelSchema).build();
xmlInputFactory = XMLInputFactory.newInstance();
}
@Override
public EdmDataServices getMetadata() {
return this.metadata;
}
/**
* Returns OEntities built from xml data. In the real world the xml data
* could be filtered using the provided queryInfo.filter.
* The real implementation should also respect
* queryInfo.top and queryInfo.skip.
*/
@Override
public EntitiesResponse getEntities(String entitySetName, QueryInfo queryInfo) {
EdmEntitySet ees = getMetadata().getEdmEntitySet(entitySetName);
URL url =null;
URLConnection urlConnection = null;
InputStream is=null;
try {
url = new URL("http://myserver.com/mycontext/getSource.jsp");
urlConnection = url.openConnection();
is = new BufferedInputStream(urlConnection.getInputStream());
} catch (MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
XMLEventReader reader = null;
try {
// transform the xml to OEntities with OProperties.
// links are omitted for simplicity
reader = xmlInputFactory.createXMLEventReader(is);
List<OEntity> entities = new ArrayList<OEntity>();
List<OProperty<?>> properties = new ArrayList<OProperty<?>>();
boolean inCustomer = false;
String id = null;
String data = null;
while (reader.hasNext()) {
XMLEvent event = reader.nextEvent();
if (event.isStartElement()) {
if ("customers".equals(event.asStartElement().getName().getLocalPart())) {
inCustomer = true;
}
} else if (event.isEndElement()) {
String name = event.asEndElement().getName().getLocalPart();
if ("customers".equals(name)) {
entities.add(OEntities.create(ees, OEntityKey.create(id), properties, null));
properties = new ArrayList<OProperty<?>>();
inCustomer = false;
} else if (inCustomer) {
if ("recordID".equals(name)) {
id = data;
}
properties.add(OProperties.string(name, data));
}
} else if (event.isCharacters()) {
data = event.asCharacters().getData();
}
}
return Responses.entities(entities, ees, null, null);
} catch (XMLStreamException ex) {
throw Throwables.propagate(ex);
} finally {
try {
if (reader != null) reader.close();
} catch (XMLStreamException ignore) {}
try {
is.close();
} catch (IOException ignore) {}
}
}
@Override
public CountResponse getEntitiesCount(String entitySetName, QueryInfo queryInfo) {
throw new NotImplementedException();
}
@Override
public EntitiesResponse getNavProperty(String entitySetName, OEntityKey entityKey, String navProp, QueryInfo queryInfo) {
throw new NotImplementedException();
}
@Override
public CountResponse getNavPropertyCount(String entitySetName, OEntityKey entityKey, String navProp, QueryInfo queryInfo) {
throw new NotImplementedException();
}
@Override
public void close() {}
@Override
public EntityResponse createEntity(String entitySetName, OEntity entity) {
throw new NotImplementedException();
}
@Override
public EntityResponse createEntity(String entitySetName, OEntityKey entityKey, String navProp, OEntity entity) {
throw new NotImplementedException();
}
@Override
public void deleteEntity(String entitySetName, OEntityKey entityKey) {
throw new NotImplementedException();
}
@Override
public void mergeEntity(String entitySetName, OEntity entity) {
throw new NotImplementedException();
}
@Override
public void updateEntity(String entitySetName, OEntity entity) {
throw new NotImplementedException();
}
@Override
public EntityResponse getEntity(String entitySetName, OEntityKey entityKey, EntityQueryInfo queryInfo) {
throw new NotImplementedException();
}
@Override
public EntityIdResponse getLinks(OEntityId sourceEntity, String targetNavProp) {
throw new NotImplementedException();
}
@Override
public void createLink(OEntityId sourceEntity, String targetNavProp, OEntityId targetEntity) {
throw new NotImplementedException();
}
@Override
public void updateLink(OEntityId sourceEntity, String targetNavProp, OEntityKey oldTargetEntityKey, OEntityId newTargetEntity) {
throw new NotImplementedException();
}
@Override
public void deleteLink(OEntityId sourceEntity, String targetNavProp, OEntityKey targetEntityKey) {
throw new NotImplementedException();
}
@Override
public BaseResponse callFunction(EdmFunctionImport name, Map<String, OFunctionParameter> params, QueryInfo queryInfo) {
throw new NotImplementedException();
}
@Override
public MetadataProducer getMetadataProducer() {
return null;
}
@Override
public <TExtension extends OExtension<ODataProducer>> TExtension findExtension(Class<TExtension> clazz) {
return null;
}
}
}

Notice the line url = new URL("http://myserver.com/mycontext/getSource.jsp");

This is the .jsp (can be a servlet, or our Bogoboards Gateway) that converts a BI Service into a simple XML feed. This .jsp looks like so:

<%@ page language="java" contentType="text/xml; charset=UTF-8" pageEncoding="UTF-8"%>
<%@ page import="com.crystaldecisions.sdk.framework.IEnterpriseSession,
com.crystaldecisions.sdk.framework.CrystalEnterprise,
com.crystaldecisions.sdk.exception.SDKException, com.crystaldecisions.sdk.framework.ISessionMgr,
com.crystaldecisions.sdk.occa.security.ILogonTokenMgr, com.businessobjects.bcm.*,tableau_source_rk_pkg.*,
javax.xml.rpc.Service,java.net.URL, java.util.*, java.text.NumberFormat,
java.math.RoundingMode" %>
<%@ page trimDirectiveWhitespaces="true"%>
<%
try{
//in this example, hard coded BO account, in real life will integrate auth or obtain from form
String username = "myusername";
String password = "mypwd";
String token = "";
//Authenticate user and get BO session
IEnterpriseSession enterpriseSession = CrystalEnterprise.getSessionMgr().logon(username, password, "cmsname:6400", "secEnterprise");
String serSess = enterpriseSession.getSerializedSession();
ILogonTokenMgr tokenMgr = enterpriseSession.getLogonTokenMgr();
token = tokenMgr.getDefaultToken();
session.setAttribute("serSess",serSess);
session.setAttribute("EnterpriseSession",enterpriseSession);
session.setAttribute("token",token);
}
catch(Exception e){
System.out.println(new java.util.Date()+": error on .jsp: "+e);
response.sendRedirect("out.html");
}
%>
<%
String str = "";
String rowStr = ""; //start empty, not null, so the literal "null" is not concatenated into the xml
NumberFormat formatter = NumberFormat.getCurrencyInstance(java.util.Locale.US);
formatter.setRoundingMode(RoundingMode.HALF_EVEN);
formatter.setMaximumFractionDigits(0);
URL endpoint = new URL("http://server.com/dswsbobje/qaawsservices/queryasaservice/biws?cuid=SOMEBISERVICECUID&authType=secEnterprise&locale=en_US&timeout=60&ConvertAnyType=true");
BIServicesSoapStub stub = new BIServicesSoapStub(endpoint,null);
try{
GetReportBlock_tableau_source_rk parameters = new GetReportBlock_tableau_source_rk();
parameters.setGetFromLatestDocumentInstance(true);
//In this example, reading report cache data, can set setRefresh to true to hit the DB each time
//parameters.setRefresh(true);
parameters.setResetState(true);
QaaWSHeader request_header = new QaaWSHeader();
request_header.setSerializedSession((String)session.getAttribute("serSess"));
//Call the soap service and get the response
GetReportBlock_tableau_source_rkResponse res = stub.getReportBlock_tableau_source_rk(parameters, request_header);
//Iterate through the response and parse it out to xml format
//In this example, hard coded parsing, can use the header to parse out dynamically
java.lang.Object[][] table = res.getTable();
for (int i=0;i<table.length;i++) {
for (int x=0;x<table[i].length;x++) {
if(x==table[i].length-1) {
rowStr = rowStr+"<customers><state>"+(String)table[i][1]+"</state><lines>"+
(String)table[i][2]+"</lines><manager>"+(String)table[i][3]+"</manager><owned>"+(String)table[i][4]+"</owned><revenue>"+
(Double)table[i][5]+"</revenue><margin>"+(Double)table[i][6]+"</margin><recordID>"+(Double) table[i][0]+"</recordID></customers>";}
}
}
}
catch(Exception e) {
System.out.println(new java.util.Date()+": Error in get and display data: " + e);
}
str = "<?xml version='1.0' encoding='utf-8'?><customersList>"+
rowStr+
"</customersList>";
out.println(str); //output the xml
%>

These are the two main components of the solution. The rest is more setup, less code. So now, my java app calls a BI Service that is converted to XML and exposed as an OData producer, so ANY client that understands OData can read it. If you would like to try out this OData service, you can point your Tableau to: http://bogoboards.com/gmap/jerseyproxy.jsp

The OData producer for this example is exposed through a proxy on Tomcat. The data might look familiar: it's a webi report based on the eFashion universe, limited to a couple of states and one line of clothing, to keep things light and simple for this example…

So, armed with this URL, I was now able to connect Tableau to my report-based OData producer, make changes in webi to modify the data, and refresh the data source in Tableau to automatically update the dashboard! Here are some screen shots of the process I used to test out the live refresh functionality:

1. Connect to the Odata Producer from Tableau


2. Data from OData producer via webi is available in Tableau to work with now

3. Setup a simple chart

4. Now, go to the webi report that is exposed via the OData producer

5. Open the webi report

6. Modify the query to add more data

7. Save the modified webi

8. Now, switch back to Tableau and refresh the OData producer data source

9. Observe the new data flows through


Posted in BI At Large, BusinessObjects 4.0, Data visualization, Web Intelligence | Tagged , , , , | 4 Comments