Predicting the news with SAP HANA

If you knew who dominated the news yesterday, would you be able to predict who will dominate the news tomorrow…? This is what I set out to find out, leveraging some of the new data technologies available to us today…

The big data revolution continues to quietly but surely change the course of our evolution. It’s changing how we behave, how we interact, and how we think. And it’s not because the concepts of big data are new or unprecedented; it’s because it is now possible to do things that technologists, entrepreneurs, scientists and analysts could only dream of a few short decades ago. The ideas of harnessing information for decision making or prediction are not new; what is new is that performing the levels of calculation and computation needed to accomplish this is now possible, accessible and financially feasible.
SAP HANA is a great case in point, and I have been working with some of my colleagues at Cleartelligence on an interesting use case to demonstrate the accessibility and feasibility of big data concepts: predicting the news.
This is an especially exciting topic and time for me, as I can put together several of my passions and skills in new ways, combining my IT degree and background, my psychology degree, and my social studies interests (my high school major) in ways I could never imagine before…
I started with a hypothesis, or a premise: can the news, as reported in popular media channels, be predicted? My hypothesis is that yes, to a degree. If we can amass enough data, we can look for patterns in news categories and see if we can identify seasonality in them. For example, can we correlate seasonality with certain types of news? Are there any sentiment patterns we can use to predict tomorrow’s news?
Just several years ago, testing out such a concept was completely infeasible for anyone but the few individuals who had access to the world’s most expensive computing equipment. Today, we can tackle such a project with commodity hardware and enterprise software available to any organization.
So, the first step in the project is to collect data. Of course, the more data we gather, the better and more accurate our analysis can become. And this is one of the key reasons why big data is becoming so pervasive today: the technology now exists to collect sufficient amounts of data and process them in an efficient and economic manner, producing high quality results.
To collect my data, I wrote a small Java program that crawls several of the top news web sites and scrapes their front pages into our HANA database. This program runs nightly, so each day our news database grows by one front page per site scraped.
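The scraper itself can be sketched roughly like this. This is a simplified illustration, not the actual program: the site list, class name and the HANA table mentioned in the comment are all made up, and the real crawler would use the HANA JDBC driver to insert each page.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Rough sketch of the nightly news crawler (names and URLs are placeholders)
public class NewsScraper {

    // Strip markup so only the visible front-page text is kept for analysis
    public static String stripTags(String html) {
        return html.replaceAll("<[^>]*>", " ").replaceAll("\\s+", " ").trim();
    }

    // Download the raw front page of one site
    public static String fetchFrontPage(String siteUrl) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(siteUrl).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) sb.append(line).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String[] sites = { "http://example.com/news" }; // placeholder site list
        for (String site : sites) {
            String text = stripTags(fetchFrontPage(site));
            // In the real program this row goes into HANA via JDBC, e.g.:
            // INSERT INTO NEWS_PAGES (SCRAPE_DATE, SITE, CONTENT) VALUES (CURRENT_DATE, ?, ?)
            System.out.println(site + " -> " + text.length() + " characters scraped");
        }
    }
}
```

Running this once per night from a scheduler (cron, for example) is all it takes to grow the data set steadily.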

Next, I used the HANA text analysis functions to index the web site data, which is stored as a BLOB. HANA can automagically process free-form text using the text analysis functions and has several configuration options to extract meaning from the unstructured BLOBs. Some of the options include LINGANALYSIS_BASIC, LINGANALYSIS_FULL and EXTRACTION_CORE. Each processing option provides different capabilities, from parsing individual words to using complex linguistic analysis and pattern matching to retrieve specific information about customers’ needs and perceptions.

The EXTRACTION_CORE option proved extremely insightful: it not only extracted meaning out of the BLOB, it also categorized the results into pre-defined categories, and it was easy to use and simple.
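As a sketch of what this looks like in SQL (the table, column and index names here are hypothetical, shown with an NCLOB column that the text analysis functions can index, and the exact options may vary by HANA revision):

```sql
-- hypothetical table of scraped front pages
CREATE COLUMN TABLE NEWS_PAGES (PAGE_ID INTEGER PRIMARY KEY, CONTENT NCLOB);

-- index the text column using the EXTRACTION_CORE configuration
CREATE FULLTEXT INDEX NEWS_IDX ON NEWS_PAGES (CONTENT)
    CONFIGURATION 'EXTRACTION_CORE'
    TEXT ANALYSIS ON;

-- the extracted entities and their categories land in a generated $TA_ table
SELECT TA_TOKEN, TA_TYPE, COUNT(*) AS HITS
FROM "$TA_NEWS_IDX"
GROUP BY TA_TOKEN, TA_TYPE
ORDER BY HITS DESC;
```

The $TA_ table is where the pre-defined categories mentioned above (people, organizations, places and so on) show up, ready for counting and trending.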

This data gathering program has only been running for a short period, and I plan to continue updating this topic with additional insights, data visualization techniques and examples of interesting usage of this technology as more data is gathered.

Posted in SAP HANA | Tagged , , | Leave a comment

H+ Chronological Timeline Chart

The Google Visualization API’s latest release this August added a new timeline chart to the impressive and robust HTML5 charts already available in this free API. This chart type allows the creation of timeline charts with ease. It can be used to produce Gantt charts, calendars, and all sorts of interesting timeline visualizations. I took it for a spin using my H+ digital web series timeline (see original post here) and was happy with how simple and quick it was to implement. This example organizes the H+ episodes chronologically, as the actual episode order is non-chronological, and provides an interesting view of when most of the action actually occurs…
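A minimal sketch of the approach follows. The episode names and dates are placeholders, not the real H+ chronology, and the container id is made up; the drawing call assumes the Google Visualization loader has been included on the page with the timeline package.

```javascript
// Placeholder episodes in broadcast order: [name, start, end]
var episodes = [
  ['Ep 5', new Date(2012, 0, 3), new Date(2012, 0, 4)],
  ['Ep 1', new Date(2012, 0, 1), new Date(2012, 0, 2)],
  ['Ep 3', new Date(2012, 0, 2), new Date(2012, 0, 3)]
];

// Sort by start date so rows appear in story order, not broadcast order
function sortChronologically(rows) {
  return rows.slice().sort(function (a, b) { return a[1] - b[1]; });
}

function drawTimeline() {
  var container = document.getElementById('timeline'); // hypothetical div id
  var chart = new google.visualization.Timeline(container);
  var dataTable = new google.visualization.DataTable();
  dataTable.addColumn({ type: 'string', id: 'Episode' });
  dataTable.addColumn({ type: 'date', id: 'Start' });
  dataTable.addColumn({ type: 'date', id: 'End' });
  dataTable.addRows(sortChronologically(episodes));
  chart.draw(dataTable);
}

// On the page, load the timeline package and register the callback:
// google.load('visualization', '1', {packages: ['timeline']});
// google.setOnLoadCallback(drawTimeline);
```

The sort is the whole trick for this post: feed the timeline the episodes ordered by in-story date and the chart lays out the chronology for you.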

Posted in Data visualization, HTML5 | Tagged , , , | Leave a comment

Setting up a single report hyperlink in an SAP dashboard that works on desktop and on mobile

As of SAP BI 4.0 SP7, there are still two different APIs for opening reports via URL links on mobile and on desktop. The desktop API is the tried and true openDoc API that has been around for many years. Mobile introduced sapbi links that work within the mobile app.
So, when designing a dashboard that has a link to a report, there is no simple way to configure the URL button to support both types of links. The dashboard itself doesn’t “know” whether it is being opened in the mobile app or on a desktop computer, so it does not automatically convert openDoc URLs to sapbi (webi seems to do that well).

To work around this problem, you can use a hyperlink to the dashboard and pass a simple flash variable indicating it has been opened in desktop mode, so the openDoc URL should be used instead of the sapbi one, and vice versa. There are other advantages to using a hyperlink to the dashboard, such as passing the host information to avoid having to change it in the xlf on migrations, as well as specifying the dimensions of the swf object, something that cannot be done when opening the swf object directly from the BI Launchpad. Here are the steps to configure a single URL button to work for both desktop and mobile:

1. First, you will need to craft both links. You can use the share functionality in the mobile app to generate the mobile link for you, and the BI Launchpad to generate the openDoc link. Your links will look something like this:
For mobile: sapbi://OpenDoc/?authType=secEnterprise&default=yes&connection_name=myconn&server_url=
For desktop: AVX4JiBBBoFPk123456&sIDType=CUID&sReportName=Sales%20Summary

2. In the dashboard, add your flash variables. You may use a host info variable to pass info like the host name, port and protocol to avoid having to change those in the XLF as you migrate it between environments. For the purpose of determining whether the dashboard has been opened on the desktop, I added a flash variable called OpenedInDesktop.

3. In the model, create a formula that evaluates the value of the OpenedInDesktop flash variable. If it is set as expected from the desktop, set the formula value to be the desktop link; otherwise, make it the mobile link.
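As a sketch of that logic in the dashboard’s embedded spreadsheet (the cell references are made up for illustration: suppose the OpenedInDesktop flash variable is bound to A1, the openDoc link sits in B1, and the sapbi link in B2), the formula would look something like:

```
=IF(A1="YES", B1, B2)
```

The URL button then resolves to the desktop openDoc link only when the HTML wrapper passed OpenedInDesktop=YES.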

4. Add the URL button to the dashboard and bind its URL property to the formula created to evaluate which link to use based on the flash variable.

5. Export the dashboard to HTML format on your computer and edit the HTML file produced by Dashboards as follows: add your server host info as the value of the HostInfo flash variable, and add YES as the value of the OpenedInDesktop flash variable. Note there are two places to set each value, as highlighted in the image below. Place this file on your BO server’s web server and note the location, as you will create a hyperlink to it. For example, you can place it in the Tomcat root directory.

6. Finally, configure a hyperlink object in the BI Launchpad pointing to the dashboard, and place that object in the folder where users will be looking for the dashboard link.

And that is it! When users open the dashboard from the BI Launchpad on their desktop, the link will work as openDoc, and when they launch the dashboard from the mobile app, the link will work as sapbi.

Posted in BusinessObjects 4.0, SAP Mobile BI, Xcelsius | Tagged | 3 Comments

Tableau On BO – Setting up live connection from SAP Webi report to Tableau

I recently received several questions about connecting Tableau to SAP BI (BusinessObjects) as a data source. Tableau already has direct connectivity to HANA; however, many users out there are looking for ways to connect their existing webi (Web Intelligence) reports to Tableau. So, I set out to look for solutions to this problem and was able to develop two good ones that are automated and allow users to leverage, through Tableau, the investment they have already made in setting up webi reports.
I started out by examining the data connections available in Tableau and looking for possible candidates I can use to connect to SAP BI and webi. Two options caught my attention immediately as viable connectivity options, since they are relatively open: the “Other Databases (ODBC)” connection and the “OData” connection.
My first course of investigation was around the ODBC option. I know that Crystal Reports for example has an ODBC driver that can connect to an XML feed. So, in theory, I should be able to create an XML feed off a BI service published from a webi report, and connect using this driver.
Trying to use the Crystal Reports driver was a bust. Technically, this would have worked, but the driver, produced by DataDirect, is licensed for use with Crystal only, and trying to use it with Tableau (or any other client) produces an error message stating the driver can only be used with Crystal. If you happen to have access to a DataDirect or other XML ODBC driver, you can convert the BI Service SOAP response into an XML feed (see the jsp code below) and use that to build a DSN you can connect Tableau to. In my searches, I was only able to find commercially licensed XML ODBC drivers, so my second approach, writing my own OData producer, became more relevant.
OData is a relatively new, but very popular, internet data exchange protocol that defines ways to send and request information from a service. There are several implementations of it, and I ended up choosing the Java odata4j framework for my experiment.
I started out by setting up the odata4j project in Eclipse and getting two examples I was interested in working: the XmlDataProducerExample, which describes how to read an XML feed and expose it as an OData producer, and the ExampleProducerFactory example, which demonstrates how to expose the OData producer in Tomcat. I ended up using the Jersey-based XML example in my working prototype, but I would most certainly look to host this in Tomcat directly in a real-world situation.
So, using the odata4j examples, I created a Java application and added a class to read my own XML feed instead of the provided one. I also removed the portions of the OData framework that would allow users to make changes to the data (not needed in our scenario, which is reporting only). So, my main class connecting to the XML feed looks like so:
package org.odata4j.examples.producer;
import static org.odata4j.examples.JaxRsImplementation.JERSEY;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.events.XMLEvent;
import org.odata4j.core.OEntities;
import org.odata4j.core.OEntity;
import org.odata4j.core.OEntityId;
import org.odata4j.core.OEntityKey;
import org.odata4j.core.OExtension;
import org.odata4j.core.OFunctionParameter;
import org.odata4j.core.OProperties;
import org.odata4j.core.OProperty;
import org.odata4j.core.Throwables;
import org.odata4j.edm.EdmDataServices;
import org.odata4j.edm.EdmEntityContainer;
import org.odata4j.edm.EdmEntitySet;
import org.odata4j.edm.EdmEntityType;
import org.odata4j.edm.EdmFunctionImport;
import org.odata4j.edm.EdmProperty;
import org.odata4j.edm.EdmSchema;
import org.odata4j.edm.EdmSimpleType;
import org.odata4j.examples.AbstractExample;
import org.odata4j.examples.ODataServerFactory;
import org.odata4j.examples.producer.jpa.northwind.Customers;
import org.odata4j.exceptions.NotImplementedException;
import org.odata4j.producer.BaseResponse;
import org.odata4j.producer.CountResponse;
import org.odata4j.producer.EntitiesResponse;
import org.odata4j.producer.EntityIdResponse;
import org.odata4j.producer.EntityQueryInfo;
import org.odata4j.producer.EntityResponse;
import org.odata4j.producer.ODataProducer;
import org.odata4j.producer.QueryInfo;
import org.odata4j.producer.Responses;
import org.odata4j.producer.edm.MetadataProducer;
import org.odata4j.producer.resources.DefaultODataProducerProvider;
/**
 * This example shows how to expose xml data as an atom feed.
 */
public class XmlDataProducerExampleRon2 extends AbstractExample {

  public static final String endpointUri = "http://localhost:8010/XmlDataProducerExampleRon2.svc";

  public static void main(String[] args) {
    XmlDataProducerExampleRon2 example = new XmlDataProducerExampleRon2();
    example.run(args);
  }

  private void run(String[] args) {
    System.out.println("Please direct your browser to " + endpointUri + "/Customers");
    // register the producer as the static instance, then launch the http server
    DefaultODataProducerProvider.setInstance(new XmlDataProducer());
    new ODataServerFactory(JERSEY).hostODataServer(endpointUri);
  }

  @XmlRootElement
  public static class CustomersList {
    @XmlElement
    Customers[] customers;
  }

  /**
   * Sample ODataProducer for providing xml data as an atom feed.
   */
  public class XmlDataProducer implements ODataProducer {

    private final EdmDataServices metadata;
    private XMLInputFactory xmlInputFactory;

    public XmlDataProducer() {
      // build the metadata here hardcoded as an example;
      // one would probably generate it from an xsd schema or something else
      String namespace = "XmlExample";

      List<EdmProperty.Builder> properties = new ArrayList<EdmProperty.Builder>();

      List<EdmEntityType.Builder> entityTypes = new ArrayList<EdmEntityType.Builder>();
      EdmEntityType.Builder type = EdmEntityType.newBuilder().setNamespace(namespace)
          .setName("Customers").addKeys("recordID").addProperties(properties);
      entityTypes.add(type);

      List<EdmEntitySet.Builder> entitySets = new ArrayList<EdmEntitySet.Builder>();
      entitySets.add(EdmEntitySet.newBuilder().setName("Customers").setEntityType(type));

      EdmEntityContainer.Builder container = EdmEntityContainer.newBuilder()
          .setName(namespace + "Entities").setIsDefault(true).addEntitySets(entitySets);
      EdmSchema.Builder modelSchema = EdmSchema.newBuilder()
          .setNamespace(namespace + "Model").addEntityTypes(entityTypes);
      EdmSchema.Builder containerSchema = EdmSchema.newBuilder()
          .setNamespace(namespace + "Container").addEntityContainers(container);

      metadata = EdmDataServices.newBuilder().addSchemas(containerSchema, modelSchema).build();
      xmlInputFactory = XMLInputFactory.newInstance();
    }

    @Override
    public EdmDataServices getMetadata() {
      return this.metadata;
    }

    /**
     * Returns OEntities built from xml data. In the real world the xml data
     * could be filtered using the provided queryInfo.filter.
     * The real implementation should also respect queryInfo.top and queryInfo.skip.
     */
    @Override
    public EntitiesResponse getEntities(String entitySetName, QueryInfo queryInfo) {
      EdmEntitySet ees = getMetadata().getEdmEntitySet(entitySetName);

      URL url = null;
      URLConnection urlConnection = null;
      InputStream is = null;
      try {
        url = new URL("");
        urlConnection = url.openConnection();
        is = new BufferedInputStream(urlConnection.getInputStream());
      } catch (MalformedURLException e) {
        e.printStackTrace();
      } catch (IOException e) {
        e.printStackTrace();
      }

      XMLEventReader reader = null;
      try {
        // transform the xml to OEntities with OProperties.
        // links are omitted for simplicity
        reader = xmlInputFactory.createXMLEventReader(is);

        List<OEntity> entities = new ArrayList<OEntity>();
        List<OProperty<?>> properties = new ArrayList<OProperty<?>>();
        boolean inCustomer = false;
        String id = null;
        String data = null;

        while (reader.hasNext()) {
          XMLEvent event = reader.nextEvent();
          if (event.isStartElement()) {
            if ("customers".equals(event.asStartElement().getName().getLocalPart())) {
              inCustomer = true;
            }
          } else if (event.isEndElement()) {
            String name = event.asEndElement().getName().getLocalPart();
            if ("customers".equals(name)) {
              entities.add(OEntities.create(ees, OEntityKey.create(id), properties, null));
              properties = new ArrayList<OProperty<?>>();
              inCustomer = false;
            } else if (inCustomer) {
              if ("recordID".equals(name)) {
                id = data;
              }
              properties.add(OProperties.string(name, data));
            }
          } else if (event.isCharacters()) {
            data = event.asCharacters().getData();
          }
        }

        return Responses.entities(entities, ees, null, null);
      } catch (XMLStreamException ex) {
        throw Throwables.propagate(ex);
      } finally {
        try {
          if (reader != null) reader.close();
        } catch (XMLStreamException ignore) {}
        try {
          if (is != null) is.close();
        } catch (IOException ignore) {}
      }
    }

    @Override
    public CountResponse getEntitiesCount(String entitySetName, QueryInfo queryInfo) {
      throw new NotImplementedException();
    }

    @Override
    public BaseResponse getNavProperty(String entitySetName, OEntityKey entityKey, String navProp, QueryInfo queryInfo) {
      throw new NotImplementedException();
    }

    @Override
    public CountResponse getNavPropertyCount(String entitySetName, OEntityKey entityKey, String navProp, QueryInfo queryInfo) {
      throw new NotImplementedException();
    }

    @Override
    public void close() {}

    @Override
    public EntityResponse createEntity(String entitySetName, OEntity entity) {
      throw new NotImplementedException();
    }

    @Override
    public EntityResponse createEntity(String entitySetName, OEntityKey entityKey, String navProp, OEntity entity) {
      throw new NotImplementedException();
    }

    @Override
    public void deleteEntity(String entitySetName, OEntityKey entityKey) {
      throw new NotImplementedException();
    }

    @Override
    public void mergeEntity(String entitySetName, OEntity entity) {
      throw new NotImplementedException();
    }

    @Override
    public void updateEntity(String entitySetName, OEntity entity) {
      throw new NotImplementedException();
    }

    @Override
    public EntityResponse getEntity(String entitySetName, OEntityKey entityKey, EntityQueryInfo queryInfo) {
      throw new NotImplementedException();
    }

    @Override
    public EntityIdResponse getLinks(OEntityId sourceEntity, String targetNavProp) {
      throw new NotImplementedException();
    }

    @Override
    public void createLink(OEntityId sourceEntity, String targetNavProp, OEntityId targetEntity) {
      throw new NotImplementedException();
    }

    @Override
    public void updateLink(OEntityId sourceEntity, String targetNavProp, OEntityKey oldTargetEntityKey, OEntityId newTargetEntity) {
      throw new NotImplementedException();
    }

    @Override
    public void deleteLink(OEntityId sourceEntity, String targetNavProp, OEntityKey targetEntityKey) {
      throw new NotImplementedException();
    }

    @Override
    public BaseResponse callFunction(EdmFunctionImport name, Map<String, OFunctionParameter> params, QueryInfo queryInfo) {
      throw new NotImplementedException();
    }

    @Override
    public MetadataProducer getMetadataProducer() {
      return null;
    }

    @Override
    public <TExtension extends OExtension<ODataProducer>> TExtension findExtension(Class<TExtension> clazz) {
      return null;
    }
  }
}

Notice the line url = new URL("");

This is where the producer reads its XML feed; the URL (blanked out here) points to the .jsp below. The .jsp (it can also be a servlet, or our Bogoboards Gateway) converts a BI Service into a simple XML feed and looks like so:

<%@ page language="java" contentType="text/xml; charset=UTF-8" pageEncoding="UTF-8"%>
<%@ page import="com.crystaldecisions.sdk.framework.IEnterpriseSession,
com.crystaldecisions.sdk.framework.ISessionMgr, com.crystaldecisions.sdk.framework.CrystalEnterprise,
com.crystaldecisions.sdk.occa.security.ILogonTokenMgr, com.crystaldecisions.sdk.exception.SDKException,
com.businessobjects.bcm.*, tableau_source_rk_pkg.*,
javax.xml.rpc.Service, java.net.URL, java.util.*, java.text.NumberFormat,
java.math.RoundingMode" %>
<%@ page trimDirectiveWhitespaces="true"%>
<%
//in this example, hard coded BO account, in real life will integrate auth or obtain from form
String username = "myusername";
String password = "mypwd";
String token = "";
try {
    //Authenticate user and get BO session
    IEnterpriseSession enterpriseSession = CrystalEnterprise.getSessionMgr().logon(username, password, "cmsname:6400", "secEnterprise");
    String serSess = enterpriseSession.getSerializedSession();
    ILogonTokenMgr tokenMgr = enterpriseSession.getLogonTokenMgr();
    token = tokenMgr.getDefaultToken();
}
catch(Exception e){
    System.out.println(new java.util.Date()+": error on .jsp: "+e);
}

String str = "";
String rowStr = "";
NumberFormat formatter = NumberFormat.getCurrencyInstance(java.util.Locale.US);
try {
    URL endpoint = new URL("");
    BIServicesSoapStub stub = new BIServicesSoapStub(endpoint, null);
    GetReportBlock_tableau_source_rk parameters = new GetReportBlock_tableau_source_rk();
    //In this example, reading report cache data, can set setRefresh to true to hit the DB each time
    QaaWSHeader request_header = new QaaWSHeader();
    //Call the soap service and get the response
    GetReportBlock_tableau_source_rkResponse res = stub.getReportBlock_tableau_source_rk(parameters, request_header);
    //Iterate through the response and parse it out to xml format
    //In this example, hard coded parsing, can use the header to parse out dynamically
    java.lang.Object[][] table = res.getTable();
    for (int i=0; i<table.length; i++) {
        for (int x=0; x<table[i].length; x++) {
            if (x == table[i].length-1) {
                rowStr = rowStr+"<customers><state>"+(String)table[i][1]+"</state><lines>"+
                    (Double)table[i][5]+"</lines><margin>"+(Double)table[i][6]+"</margin><recordID>"+(Double)table[i][0]+"</recordID></customers>";
            }
        }
    }
}
catch(Exception e) {
    System.out.println(new java.util.Date()+": Error in get and display data: " + e);
}
str = "<?xml version='1.0' encoding='utf-8'?><customersList>"+rowStr+"</customersList>";
out.println(str); //output the xml
%>

These are the two main components of the solution. The rest is more setup, less code. So now, my Java app calls a BI Service that is converted to XML and exposes it as an OData producer, so ANY client that understands OData can read it. If you would like to try out this OData service, you can point your Tableau to:

The OData producer for this example is exposed through a proxy on Tomcat. The data might look familiar: it’s a webi report based on the eFashion universe, limited to a couple of states and one line of clothing, to keep things light and simple for this example…

So, armed with this URL, I was able to connect Tableau to my report-based OData producer, make changes in webi to modify the data, and refresh the Tableau data source to automatically update the dashboard! Here are some screenshots of the process I used to test out the live refresh functionality:

1. Connect to the OData producer from Tableau


2. The data from the OData producer, via webi, is now available in Tableau to work with

3. Set up a simple chart

4. Now, go to the webi report that is exposed via the OData producer

5. Open the webi report

6. Modify the query to add more data

7. Save the modified webi

8. Now, switch back to Tableau and refresh the OData producer data source

9. Observe the new data flows through



Posted in BI At Large, BusinessObjects 4.0, Data visualization, Web Intelligence | Tagged , , , , | 10 Comments

CONNECTED SAP Dashboard as HTML5 – Step-By-Step Info

As promised in the last post, below is a detailed account of how to get your connected, HTML5-harvested SAP Dashboards (Xcelsius) files to work outside the SAP BI app.
I tried to strike a fine balance between keeping this to a simple set of instructions and giving enough info for troubleshooting and explanation.
At a very high level, the process is as follows:
1. Harvest the HTML5 source files
2. Place them in a webapp
3. Obtain a BO token (as in any other connectivity scenario with BO)
4. Make a few small modifications to the HTML5 files to leverage the token and pass it through to the REST services being used to invoke the connections in the dashboard
So, let’s get started!
Step 0:
Copy the temp files while previewing to mobile. I am not going to elaborate on this step; I assume by now you know how to find the dashboard HTML5 files.

Step 1:
Paste the HTML5 source files from the temp directory into a webapp. In my example, I created a webapp on the BO server Tomcat called dashboardunwiredrepeat. This webapp has all the jars needed in the WEB-INF directory to use the BO SDK to authenticate a user and obtain a token. To make it easy to deploy multiple dashboards in a single app, you can make subfolders for each dashboard, such as dash1 and dash2 in this example.
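To make the layout concrete, the webapp described above ends up looking roughly like this (folder names are the ones used in this post; inc.jsp is the token-fetching page covered in the next steps):

```
tomcat/webapps/dashboardunwiredrepeat/
    inc.jsp              <- obtains the BO token and stores it in the session
    WEB-INF/lib/         <- BO SDK jars
    dash1/dashboard.jsp  <- renamed from dashboard.html
    dash2/dashboard.jsp
```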

Step 2:
Prepare to use the BO token. In my example, I used a .jsp file to authenticate against the BO system with a hard coded username and pwd, obtain a token and store it in the Tomcat session. To leverage this token, I simply renamed dashboard.html to dashboard.jsp, making it possible to include my .jsp file inside the main dashboard page to use the token. Of course, there are many different ways to accomplish this and pass the token to the dashboard.html file without converting it to a .jsp with server side code.

Step 3:
Get access to the BO token. As I explained in the prior step, my inc.jsp file contains SDK code to obtain a token and store it in the session. I will include it in the dashboard.jsp file so I can easily access the token. Open dashboard.jsp with a text editor and make the following changes:
a. Paste the following line as the first line in the file:

<%@ include file="../inc.jsp" %>

b. Paste the following code immediately after the line

<script type="text/javascript">

in the file (rename the url to be your server url as needed):


// Begin Custom code added to Dashboards generated code
var mySession = '';
$(window).load(function() {
    $.ajax({
        type: "POST",
        url: "http://[host:port]/dswsbobje/services/Session",
        data: "{\"loginWithToken\":{\"@xmlns\":{\"$\":\"\"},\"loginToken\":{\"$\":\"<%= session.getAttribute("BOTOKEN")%>\"},\"locale\":{\"$\":\"\"},\"timeZone\":{\"$\":\"\"}}}",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        beforeSend: function (xhr) {
            xhr.setRequestHeader('SOAPAction', '');
        },
        success: function(msg) {
            mySession = msg['ns:loginWithTokenResponse']['ns:SessionInfo']['@SerializedSession'];
        },
        error: function (errormessage) {
            $('#msgid').html("oops got an error in first service call!");
        },
        async: false
    });
});
// End Custom code added to Dashboards generated code


Save and close the dashboard.jsp file

Step 4:

Locate the file file_1.js and open it in a text editor. Copy the file contents, paste them into an online JavaScript beautifier web site, click the beautify button, and paste the formatted text back into the file_1.js file. Find the line

“this._ceSerializedSession = this._connectionAPI.getInitParameter(l.PARAM_CE_SERIALIZED_SESSION);”

and comment it out by typing two forward slashes in front of it like so:

// this._ceSerializedSession = this._connectionAPI.getInitParameter(l.PARAM_CE_SERIALIZED_SESSION);

Then paste the following line under the commented out line:

this._ceSerializedSession = mySession;

Save the file and close it

Step 5:

Locate the file file_2.js (next to file_1.js), open it with a text editor, and format it with the beautifier web site as described in the previous step. Find the following block of code:

} else {
    u.soapAction = p.RUN_QUERY;
    u.request = this._generateRunQuery();
    u.response.responseRoot("x:runQueryResult")

comment out the three lines like so

} else {

//            u.soapAction = p.RUN_QUERY;

//            u.request = this._generateRunQuery();

//            u.response.responseRoot("x:runQueryResult")

And paste the following three lines beneath the lines you just commented out:

u.soapAction = p.RUN_QUERY_SPEC;

u.request = this._generateRunQuerySpec();



That’s it! Take your connected dashboard for a spin at http://yourserver:yourport/yourwebapp/yoursubdir/dashboard.jsp (dash2 is the subdirectory in this example)

Posted in BusinessObjects 4.0, HTML5, SAP Mobile BI, Xcelsius | Tagged | 41 Comments

How to publish CONNECTED SAP Dashboard (Xcelsius) as HTML5 OUTSIDE the mobile app!

This is a fully functioning connected HTML5 version of an SAP dashboard that connects to the eFashion universe on each State change and refreshes the data. Yes, you are not reading this wrong. No plugins used, no third party tools or tricks, just an SAP Dashboards XLF file. Read on to learn more…

Ever since I read Joseph Warbington’s SCN post about finding the SAP Dashboards HTML5 source files when previewing a dashboard for mobile, I’ve been intrigued with the possibilities this opened up.

So, during the SAPPHIRE conference, I managed to find an SAP employee on the show floor who was a member of the Dashboard product team. “Can you please tell me when will SAP make the option to export dashboards outside the mobile App available?” I asked. “Well, never… there is no such plan and dashboard relies on the platform for certain things, so we will not make such functionality available…”. Hmmm… Really. Well, this conversation left me a bit disappointed (and by the way, I have no idea if this is the official SAP stance, this was just one side conversation, with one person who works at SAP, who may or may not know the entire product road map). So, after I got home, I decided to try and replicate the full dashboard functionality in HTML5 outside the BI app.

As Joseph Warbington describes, harvesting the HTML5 source files is relatively simple, and it all works just fine, except for one important piece: connections… Since SP5, we can use the Query Browser to embed connectivity in mobile dashboards, and quite frankly, without connectivity, I don’t really see much enterprise use for a dashboard. And of course, the connections do not “just work” after copying the HTML5 files from the Temp directory.

So, armed with the Chrome network debugger, Fiddler, a good understanding of the various BO SDKs (the web services and enterprise ones in particular), and with the invaluable help of my colleague, Yevgeniy (Eugene) Tsvetov, we set out to understand how Dashboards invokes connections and what we would need to get the connections to work outside the BI app. The result is displayed at the top of this post.

The files generated by SAP Dashboards already contain all the scaffolding needed for the connections to work; the only thing that is really missing is the enterprise session. So, by adding a few lines of JS to the generated files, we can pass the session using the enterprise and web services SDKs. And REST assured, it all works!

This opens up the possibility for Dashboard designers to post their fully connected awesome designs not just to the BI App, but also to any web site, without needing to use flash!!! Enjoy…

Posted in BusinessObjects 4.0, Data visualization, HTML5, SAP Mobile BI, Xcelsius | Tagged , | 14 Comments

Crystal Reports on SAP Mobile BI

It seems like every month SAP is unleashing new functionality for its Mobile BI app. Explorer, Web Intelligence and recently Dashboards have all made their way into the BI app and seem to work better, look nicer and perform faster than on the desktop! The mobile versions are not only slick and easy to use, they are also very easy to deploy. For webi, all it takes to make a report mobile ready is assignment to a category (as is the case for Dashboards and Crystal), Explorer is there by default, and a Dashboard can be saved as Mobile when exported to the repository. It’s that simple. Well, almost… While the product does allow developers to deploy content to the BI app very easily, the challenges of the design are still there, with a twist. Data issues, business logic complexities, real estate constraints, functionality gaps: all of the same challenges that make BI content development difficult for any device are applicable, with the additional challenge of new constraints related to mobile device usage and functionality that is still “ramping up”. And while Webi, Explorer and Dashboards are “sexy”, dashing and elegant tools, I set out to try the capabilities of good old Crystal Reports in the BI app, and as always when it comes to Crystal, I was not disappointed!

While Crystal on the iPad lacks some of the Explorer and Webi “swooshiness” and feels a bit “boxy”, it certainly provides much more flexibility in design, navigation and layout capabilities. And since images can be used to enhance its look and feel, Crystal can be made to look as modern as any. Unlike Webi on the mobile BI app, where the report design is limited to simplistic blocks that get converted automagically to the stunning iPad design, Crystal reports render EXACTLY the way you design them on the desktop. So you can lay out the screen any which way you like, which can be very important for some design situations. Crystal’s unlimited data connectivity also makes it a great choice for directly connecting to any data source with ease: your Crystal report on the iPad can connect to anything you need it to, from universes to any RDBMS, web service, and beyond. The group tree functionality is also enabled on the iPad and provides a slick and easy way to navigate large hierarchies; prompts work as well, and drill downs are all there in their interactive glory. You can paginate using the page number icons, or simply swipe left to move to the next page. Nice.

The image I posted is from a Crystal Report I created with sales data and hierarchy, and I hope it provides a good example of what Crystal actually looks like on the iPad.

So, all in all, Crystal can be an important companion for your mobile BI content deployment and, after more than two decades of reign over the enterprise reporting realm, can still help address use cases and reporting scenarios that other, more modern tools cannot.

Posted in BusinessObjects 4.0, SAP Mobile BI | Tagged , | Leave a comment

BI Happiness with html5 charts animation

The other day, my colleague Rob Blackburn wrote a really cool and elegant function to animate html5 charts for our dashboards. It was so cool that it even made my scatter charts smile…

Posted in Data visualization, HTML5 | Tagged | Leave a comment

How I loaded my blog into HANA (and what I learned about it once it was there…)

Unstructured data analysis is one of the most interesting aspects of “big data”. It’s certainly impressive to be able to process massive amounts of structured data in no time, but analyzing unstructured data opens up completely new possibilities that can lead to the creation of whole new disciplines or industries. To test out HANA’s text analysis capabilities, I thought I would try to load my blog into a column table and see what it can do.
Leveraging my company’s AWS HANA instance, I started out by making a simple single-column table. The important thing to note here is that for text analytics to work, the data type has to be NCLOB. BLOB, for example, will not work.
So, in HANA studio, after connecting to the HANA instance and my schema, I executed:

--1. Create an empty table with NCLOB column to store blog content
create column table RONKELER.BLOG_TEXT (blog_content NCLOB); -- column name here is illustrative

The next step was a bit more interesting. How do I actually load my blog into the table…? Well. First, I had to get my blog out to a file. Since I use wordpress, that was as simple as selecting the Export option from the Tools menu of the administration section.
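Worth a quick note on that file: the WordPress export is a WXR document (an RSS 2.0 dialect), and each post body is wrapped in a <content:encoded> element. If you ever want to load posts individually rather than the whole file as one value, they can be pulled out with standard Java XML parsing. Here is a minimal sketch (the WxrPeek class name and the inline sample document are mine, for illustration only):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class WxrPeek {
 // WXR wraps each post body in <content:encoded>, in the standard RSS content namespace
 static final String CONTENT_NS = "http://purl.org/rss/1.0/modules/content/";

 public static String[] postBodies(InputStream in) throws Exception {
  DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
  dbf.setNamespaceAware(true); // needed so content:encoded resolves to its namespace
  Document doc = dbf.newDocumentBuilder().parse(in);
  NodeList nodes = doc.getElementsByTagNameNS(CONTENT_NS, "encoded");
  String[] bodies = new String[nodes.getLength()];
  for (int i = 0; i < nodes.getLength(); i++)
   bodies[i] = nodes.item(i).getTextContent(); // CDATA content comes back as plain text
  return bodies;
 }

 public static void main(String[] args) throws Exception {
  // Tiny inline sample standing in for a real export file
  String sample = "<rss xmlns:content=\"" + CONTENT_NS + "\"><channel><item>"
    + "<title>Post 1</title>"
    + "<content:encoded><![CDATA[Hello HANA]]></content:encoded>"
    + "</item></channel></rss>";
  String[] bodies = postBodies(new ByteArrayInputStream(sample.getBytes("UTF-8")));
  System.out.println(bodies.length + " post(s); first body: " + bodies[0]);
 }
}
```

Each element of the returned array could then be inserted as its own row, rather than loading the entire export as a single value.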

With my blog exported as an xml file, I set my sights on loading it into my table. Data Services would be my typical choice, as it’s fast, easy to use, and has great integration with HANA. However, to keep my options open, I looked for a programmatic solution that will allow more robust capabilities. And as it turns out, the solution was similar to loading a blob object into any other database. I ended up writing a small java program to load the file in.
To connect to HANA in java, I needed to find the ngdbc.jar library and add it to my project build path in Eclipse. The rest was pretty standard:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.*;

public class HanaConn {
 public static void main(String args[]) {
  try {
   File f = new File("C:\\FOLDER\\FILE.xml");
   InputStream is = new FileInputStream(f);
   // ngdbc.jar provides the HANA JDBC driver; the URL format is jdbc:sap://host:port
   Connection conn = DriverManager.getConnection(
     "jdbc:sap://imdbhdb:30015", "SYSTEM", "PWD");
   PreparedStatement stmt = conn
     .prepareStatement("INSERT INTO RONKELER.BLOG_TEXT VALUES(?)");
   stmt.setBinaryStream(1, is, (int) f.length());
   stmt.executeUpdate();
   conn.close();
   System.out.println("Done inserting!");
  } catch (Exception e) {
   System.out.println("Exception occurred: " + e.getMessage());
  }
 }
}

So, step 2:

--2. Run java program to load blog content

Next, I modified my table to add a primary key. Using text analytics requires the analyzed table to have a PK:

--3. Add column to be used as PK
alter table RONKELER.BLOG_TEXT add (k int);
--4. Populate PK value
update RONKELER.BLOG_TEXT set k = 1;
--5. Add PK constraint
alter table RONKELER.BLOG_TEXT add constraint pkconst primary key (k);

So far, things have been pretty standard. The cool part was turning on the text analytics. Using one simple SQL command, HANA processed the content of my text column and parsed it out in nanoseconds!

--6. Create fulltext index on blog content
create fulltext index BLOG_CONTENT_IDX on RONKELER.BLOG_TEXT(blog_content) TEXT ANALYSIS ON; -- index name inferred from the generated $TA table; column name illustrative

This statement generated a table called $TA_BLOG_CONTENT_IDX. The table includes a row for each token in my blog, allowing me to then run some queries to analyze its content:

-- Analysis...
--1. How many words/unique words?
select count(*) from RONKELER."$TA_BLOG_CONTENT_IDX"; --342174 words! Wow, who knew I wrote so much...
select count(distinct upper(ta_token)) from RONKELER."$TA_BLOG_CONTENT_IDX"; --7781 unique words... Maybe I need to read more to expand my vocabulary..
--Longest word? How many times used?
select max(length(ta_token)),
 (select ta_token from RONKELER."$TA_BLOG_CONTENT_IDX"
  where length(ta_token) = (select max(length(ta_token)) from RONKELER."$TA_BLOG_CONTENT_IDX"))
from RONKELER."$TA_BLOG_CONTENT_IDX"; --54; VbZDUzY2M2ZDQtYzNmMC00OTJjLTlhMDUtNDU3MGMyY2ZkOWZm&amp -- well, not really a word, but you get the idea
--Most used words
select upper(ta_token), count(*)
from RONKELER."$TA_BLOG_CONTENT_IDX"
where length(ta_token) > 3
group by upper(ta_token)
order by count(*) desc; -- Well, need to do some more with this, but Xcelsius and Webi were pretty high up on the list

Of course, this is a tiny example, but the ability to store text and parse it quickly and easily can be an important feature in any HANA implementation. From social media content to corporate documents, this is a game changer!

Posted in SAP HANA | Tagged , , , | Leave a comment