Audit Tableau Filters Application With the JS API

While Tableau Server provides good auditing capabilities in terms of logins and dashboard views, in some cases you need very detailed, granular auditing of specific dashboards. As with other websites, you may need to understand exactly which features of a dashboard are used by observing how users interact with it: which filters they use and how, what navigation paths they take, and what selections different users make. If you do need this level of auditing detail, you can use the Tableau JavaScript API, which provides a programmatic means to capture all the interactions on a viz. As a proof of concept, I went ahead and created this demonstration. The proof of concept consists of two dashboards, laid out side by side on a web page. The left dashboard is being audited. It is loaded with my Oscars data as an extract, and contains several filters. You are encouraged to use the filters and click around this dashboard. The dashboard on the right uses a live connection to the audit database and can be refreshed to view live data by clicking the refresh button on top. As you click on filters on the left, you will see them populated on the right.
To accomplish this, I needed to address three areas of development: a database backend audit table, a server-side service to process the audit events and save them in the table, and a frontend JavaScript component to use the Tableau JS API, capture user clicks, and pass them to the backend service.
I began with the database layer and created a simple auditing schema: a database table holding the information I wanted to track. The attributes I am tracking are the audited workbook's file name and URL, the dashboard being clicked on, the username of the Tableau user invoking the action, the filter field being applied, the field type, whether the filter is being used in include or exclude mode, and the filter values being applied.

With a database table in place, I went on to write a RESTful web service that I could call from a web page via JavaScript to pass along the info to be saved in the table.
Due to the asynchronous nature of the filtering and auditing sequences, I ended up putting some more logic in the service class so it can parse a JSON string that may contain multiple filter events, each of which may contain several values being applied. Batching events this way saves on the "chatter" between the dashboard and the server, even though the auditing happens in the background asynchronously and does not impact dashboard performance.
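As a rough illustration of that flattening logic, here is a sketch in JavaScript (the post does not include the actual service code, and the payload field names are assumed): one POST body may carry several filter events, each with several values, and the service writes one audit row per (event, value) pair.

```javascript
// Hypothetical sketch of the server-side flattening; field names are illustrative.
// One batched payload -> one audit table row per applied filter value.
function flattenAuditPayload(json) {
  var events = JSON.parse(json);
  var rows = [];
  events.forEach(function (ev) {
    ev.values.forEach(function (value) {
      rows.push({
        workbook: ev.workbook,     // audited workbook name/url
        dashboard: ev.dashboard,   // dashboard clicked on
        username: ev.username,     // Tableau user invoking the action
        field: ev.field,           // filter field being applied
        fieldType: ev.fieldType,
        mode: ev.mode,             // "include" or "exclude"
        value: value               // one row per filter value
      });
    });
  });
  return rows;
}
```

Each returned row maps one-to-one onto an insert into the audit table.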


The third leg of this audit stool was the JavaScript needed to collect the user events and pass them to the service. The Tableau JS API provides methods to get filters and selections, as well as events that can be listened to for both operations. This was the hook needed to develop the audit capability. To handle filters, a hidden sheet is included in the dashboard and is "probed" for the filters applied to it from the dashboard on each filter or select event. Selection events are also captured. In developing this script, I did not use any framework such as AngularJS, or even jQuery, and kept to the basics.
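A minimal sketch of what such a capture script might look like, assuming the Tableau JS API (v2) is loaded on the page; the `/auditService` endpoint and the context fields are illustrative assumptions, not the post's actual code:

```javascript
// Pure helper: turn one categorical filter into the audit record the service expects.
// `filter` mirrors the tableau.CategoricalFilter interface (getFieldName,
// getFilterType, getIsExcludeMode, getAppliedValues).
function buildAuditRecord(filter, context) {
  return {
    workbook: context.workbook,
    dashboard: context.dashboard,
    username: context.username,
    field: filter.getFieldName(),
    fieldType: filter.getFilterType(),
    mode: filter.getIsExcludeMode() ? "exclude" : "include",
    values: filter.getAppliedValues().map(function (v) { return v.formattedValue; })
  };
}

// Browser-only wiring: listen for filter changes on the viz and post records
// to a hypothetical audit endpoint without blocking the dashboard.
function attachAudit(viz, context) {
  viz.addEventListener(tableau.TableauEventName.FILTER_CHANGE, function (event) {
    event.getFilterAsync().then(function (filter) {
      var record = buildAuditRecord(filter, context);
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/auditService", true); // async, fire-and-forget
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.send(JSON.stringify([record])); // service accepts an array of events
    });
  });
}
```

`MARKS_SELECTION` events could be wired up the same way to capture selections.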


Finally, I created another dashboard connected live to the audit database, to explore the audit events being captured on the first dashboard, and created a webpage to lay them out side by side. One final issue to grapple with was handling trusted-authentication SSO through the API, and once that was sorted out, the demo was complete. Enjoy!

Posted in Tableau

Tableau Infographic dashboard – it's not the tool that makes a dashboard cool

Ever since Josh Tapley published his infographic-style dashboard, I have been working to reproduce this design in various tools. I used SAP WebI and Crystal Reports to create similar versions in the past, and recently produced another version in Tableau. This is an interactive dashboard; clicking the Change Data button will refresh it with new random data. Click the image below for the interactive Tableau workbook.

This is another example of why proper visual design in a dashboarding project is just as important as data design (and in some cases more so). It also illustrates how far Tableau's visual capabilities can be stretched from a dashboarding perspective. Enjoy!


Posted in Data visualization, HTML5, Tableau

Change the order of execution of connections in Tableau

I recently ran into a situation where I had to change the order of refresh execution in a Tableau dashboard that had multiple connections (extracts) supporting multiple worksheets/dashboards.

When extracts are used, the order of execution of the connections is determined by their creation order: the connection that was created first gets executed first, then the second, and so on. Tableau actually appends a timestamp to the connection name upon creation, and uses that timestamp-appended name to execute the extracts in order when it's time to refresh them.

For example, I created a Tableau dashboard with 3 simple extracts, each one connected to a different Excel file. The connections were created in order (Conn1, then Conn2, then Conn3).


Now, I would like to change the order in which these connections are refreshed…

My Tableau file was saved as ConnOrderOfExec.twb. To view and modify the order in which connections will be refreshed, follow these steps:

1. Make a backup copy of the original .twb file

2. Rename the .twb file to .xml

3. Open the .xml file with a good, simple text editor (Notepad is fine; not Word, please)

4. You will find a datasource tag for each connection, with a caption attribute, which is the friendly name you have given the connection, and an internal name attribute, which follows the form databasetype.timestamp


5. To change the order of connection execution on refresh, simply replace the .timestamp with a number that represents the order of your choice! Be sure to use replace-all so you replace every occurrence of the timestamp you are changing


6. Save the .xml file, close it and then rename it back to .twb

7. Open the .twb file, and the extracts will now refresh in the new order you determined!
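To make steps 4 and 5 concrete, here is a sketch of what the datasource tags might look like before and after the edit (the captions, timestamps, and attribute layout are invented for illustration and vary by Tableau version):

```xml
<!-- Before: internal names follow databasetype.timestamp; Conn1 refreshes first -->
<datasource caption="Conn1" name="excel-direct.41976.522520" />
<datasource caption="Conn2" name="excel-direct.41976.522530" />
<datasource caption="Conn3" name="excel-direct.41976.522540" />

<!-- After: each timestamp replaced (everywhere it occurs) to make Conn3 refresh first -->
<datasource caption="Conn1" name="excel-direct.2" />
<datasource caption="Conn2" name="excel-direct.3" />
<datasource caption="Conn3" name="excel-direct.1" />
```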


Posted in Tableau

Custom tabbed navigation in tableau

This example illustrates how a combination of images used as Tableau shapes, with some careful positioning and action filters, can create custom navigation effects that feel natural and intuitive to most users. Here is a description of the process used to produce this Tableau dashboard:
First, using PowerPoint, 2 images were created: a selected tab and an unselected tab.


These images were added to the Tableau shapes repository.


Then, in Tableau, these images were used to create two sheets for each tab screen: a selected tab state and an unselected tab state.

Then, 4 dashboards were created, with the worksheets containing the images carefully positioned on each dashboard.

Finally, action filters were created on each page to navigate to the appropriate page when each navigational tab sheet is selected.

The dashboard was published to the server and… voila! The custom tabs are ready.

Posted in Data visualization, Tableau

Tableau recalls dashboard using Extract API


In this recalls dashboard, I used custom shapes/symbols to represent each hazard and reflected the number of recalls through the size of the symbols. I also used the manufacturing country to create a map of the recalls, where the size of each country's bubble represents the number of recalls (guess which country gets the biggest bubble…). I also added a link to a detail page with each recall's info, and a link to the agency's web page about the recall. And the sweetest thing: I will never need to touch this dashboard again; it refreshes the data from the XML each night automagically. Be sure to click the image above to visit the live dashboard!

A couple of years ago I published an HTML5 recalls dashboard on this blog ( To create this dashboard, I connected to the Consumer Product Safety Commission web site and converted the XML data it provides into JSON format to be consumed by the HTML5 dashboard. Recently, I ran into the Tableau Extract API and thought it would be interesting to create a Tableau version of this dashboard, leveraging the Extract API to convert the XML feed provided by the Consumer Product Safety Commission into Tableau extract format (a .tde file). Once that technical hurdle was cleared, Tableau could be used to create really neat visualizations quickly and easily. What's more, this illustrates the possibility of connecting Tableau to virtually any data source, currently supported by Tableau or not, since the Extract API can be used to create a .tde file from virtually any data that can be connected to programmatically.

First things first: I made sure I followed the Extract API instructions for Eclipse ( Once my environment was set up and I was able to run the sample MakeOrder app to produce a .tde file, it did not take much adaptation to write my own class to connect to the CPSC website to obtain the recalls in their XML format, iterate through the XML document, and convert it to the Table format needed to produce a Tableau extract file. Below is the source code for that:

package com.cleartelligence.tableau.extract.api.sample;

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.NodeList;
import com.tableausoftware.TableauException;
import com.tableausoftware.DataExtract.Extract;
import com.tableausoftware.DataExtract.Row;
import com.tableausoftware.DataExtract.Table;
import com.tableausoftware.DataExtract.TableDefinition;
import com.tableausoftware.DataExtract.Type;

public final class MakeRecalls {

    private static TableDefinition makeTableDefinition() throws TableauException {
        //Create a table definition based on the recall data
        TableDefinition tableDef = new TableDefinition();
        tableDef.addColumn("UPC",          Type.UNICODE_STRING);
        tableDef.addColumn("country_mfg",  Type.UNICODE_STRING);
        tableDef.addColumn("hazard",       Type.UNICODE_STRING);
        tableDef.addColumn("manufacturer", Type.UNICODE_STRING);
        tableDef.addColumn("prname",       Type.UNICODE_STRING);
        tableDef.addColumn("recDate",      Type.UNICODE_STRING);
        tableDef.addColumn("recallNo",     Type.UNICODE_STRING);
        tableDef.addColumn("recallURL",    Type.UNICODE_STRING);
        tableDef.addColumn("type",         Type.UNICODE_STRING);
        tableDef.addColumn("y2k",          Type.UNICODE_STRING);
        return tableDef;
    }

    private static void parseRecalls(Table table, String startDate, String endDate) throws Exception {
        TableDefinition tableDef = table.getTableDefinition();
        //URL to obtain recalls XML data - params for the start and end date are passed in from main
        //(the CPSC endpoint address is omitted here, as in the original post)
        URL url = new URL("" + startDate + "&endDate=" + endDate + "&userId=test&password=test");
        URLConnection connection = url.openConnection();
        //After examining the xml structure, iterate down to the XML node level needed
        Document doc = parseXML(connection.getInputStream());
        NodeList first = doc.getChildNodes();
        NodeList second = first.item(0).getChildNodes();
        NodeList third = second.item(1).getChildNodes();
        for (int i = 0; i < third.getLength(); i++) {
            //result nodes hold the recall data in the xml; parse them into the table
            if ("result".equals(third.item(i).getNodeName())) {
                NamedNodeMap attr = third.item(i).getAttributes();
                Row row = new Row(tableDef);
                for (int x = 0; x < attr.getLength(); x++) {
                    String insertItem = attr.item(x).getNodeValue();
                    if (insertItem.length() < 1) {
                        insertItem = "NULL";
                    }
                    row.setString(x, insertItem);
                }
                table.insert(row);
                System.out.println("Wrote line " + i + " into .tde file");
            }
        }
    }

    private static Document parseXML(InputStream stream) throws Exception {
        DocumentBuilderFactory objDocumentBuilderFactory = DocumentBuilderFactory.newInstance();
        DocumentBuilder objDocumentBuilder = objDocumentBuilderFactory.newDocumentBuilder();
        return objDocumentBuilder.parse(stream);
    }

    public static void main(String[] args) {
        //start and end date args for the recalls data as yyyy-mm-dd, for example, 2014-12-31
        String start = args[0];
        String end = args[1];
        //path to .tde
        Path path = Paths.get("C:\\recallsTDE\\recalls-java.tde");
        try {
            //remove any prior version of the .tde
            Files.deleteIfExists(path);
        } catch (IOException e) {
            e.printStackTrace();
        }
        try (Extract extract = new Extract("C:\\recallsTDE\\recalls-java.tde")) {
            TableDefinition tableDef = makeTableDefinition();
            Table table = extract.addTable("Extract", tableDef);
            parseRecalls(table, start, end);
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

The next step was a bit more tricky, as it was not documented anywhere I could find: how do you actually use the .tde file in a Tableau Server environment so that dashboards can be created off it and refreshed automatically? After experimenting with multiple methods, I settled on the following approach: run the extract program as a runnable jar on a server, then use the tabcmd command-line tool to publish the data source to my Tableau Server. The .bat file I created to run the runnable jar and publish the .tde to the server looks like this:

cd C:\recallsTDE
java -jar "C:\recallsTDE\recalls-java.jar" 2000-01-01 2020-12-31
tabcmd publish "C:\recallsTDE\recalls-java.tde" -o -s http://[my.tableau.server.ip] -u [myTableAdmin] -p [myTableauAdminPwd] -t [mySite]

My runnable jar, recalls-java.jar, sits in a directory called recallsTDE on the C drive. It takes two arguments: 2000-01-01, the first date to look for recalls from the government web site, and 2020-12-31, the end date.

After the Java program finishes running and the .tde is created, it is published to the server with the tabcmd command, which you can look up as well. The tabcmd documentation is only fair, but the tool is intuitive enough to make your way through.

I put this .bat file into a Windows scheduled task that runs daily, and now I have an automatically refreshing Tableau Server data connection to work with.

The next step was of course to create the dashboard from the data in Tableau Desktop.

And the final step was to leverage the trusted authentication setup I have with our front-facing servers (since Tableau Server sits behind the firewall) to expose the dashboard via a URL that bypasses the login screen. Here is the code for that step as well:

<%@ page session="false" %>
<%@ page import="java.io.*,java.net.*" %>
<%
    String user = "[TableauUserWithAccessToDashboard]";
    String wgserver = "[TableauServerAddress - internal]";
    String dst = "views/[workbook]/[view]";
    String params = ":embed=yes&:toolbar=no&:tabs=no&:refresh";

    String ticket = getTrustedTicket(wgserver, user, request.getRemoteAddr());
    if ( !ticket.equals("-1") ) {
        // redirect to the ticketed view URL
        response.setStatus(HttpServletResponse.SC_MOVED_TEMPORARILY);
        response.setHeader("Location", "http://server/trusted/" + ticket + "/t/site/" + dst + "?" + params);
    } else {
        // handle error
        throw new ServletException("Invalid ticket " + ticket);
    }
%>
<%!
    private String getTrustedTicket(String wgserver, String user, String remoteAddr)
        throws ServletException
    {
        OutputStreamWriter out = null;
        BufferedReader in = null;
        try {
            // Encode the parameters
            StringBuffer data = new StringBuffer();
            data.append(URLEncoder.encode("username", "UTF-8"));
            data.append("=");
            data.append(URLEncoder.encode(user, "UTF-8"));
            data.append("&");
            data.append(URLEncoder.encode("client_ip", "UTF-8"));
            data.append("=");
            data.append(URLEncoder.encode(remoteAddr, "UTF-8"));
            data.append("&");
            data.append(URLEncoder.encode("target_site", "UTF-8"));
            data.append("=");
            data.append(URLEncoder.encode("Site2", "UTF-8"));

            // Send the request
            URL url = new URL("http://" + wgserver + "/trusted");
            URLConnection conn = url.openConnection();
            conn.setDoOutput(true);
            out = new OutputStreamWriter(conn.getOutputStream());
            out.write(data.toString());
            out.flush();

            // Read the response (the trusted ticket, or -1 on failure)
            StringBuffer rsp = new StringBuffer();
            in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ( (line = in.readLine()) != null) {
                rsp.append(line);
            }
            return rsp.toString();
        } catch (Exception e) {
            throw new ServletException(e);
        } finally {
            try {
                if (in != null) in.close();
                if (out != null) out.close();
            } catch (IOException e) {}
        }
    }
%>

And there you have it. Enjoy!

Posted in Data visualization, Tableau, Uncategorized

Don’t let the software migration cycle cripple your BI project


Software developers have been taught for decades that they must follow the Development/Integration/Staging/Production practice. This practice was instituted to minimize the number of "bugs" in the software and allow for rigorous testing of code changes without "interfering with production". It was quickly adopted by IT departments and is now the practical standard followed by organizations large and small. While BI straddles the line between technologists and business folks, it is typically controlled by the organization's IT/IS/Development department (it requires software and hardware..), and as such, the development of BI content is approached in the same fashion as any traditional software project: we develop in a dev system, we integrate in a shared environment with limited datasets, we test and stage in an environment close to prod, and we finally deploy to production. There are of course variations on this theme, but most of you are very familiar with this rant. Unfortunately, this approach has a crippling effect on BI content development, which typically needs to be very rapid, often does not need "accounting accuracy", and is very difficult and expensive to replicate across multiple systems.

First, there's speed. Burning business questions cannot wait for months, or even weeks, while reports or queries are migrated through various systems designed to work as a chain of error-catching nets. Some margin of error is allowed (and even assumed) when it comes to many business questions (more on this in the next point). To make things even worse for IT departments struggling to keep up with the ever-increasing speed of demand, most BI vendors provide self-service and ad-hoc capabilities that allow savvy business users to design their own reporting content. This increases the frustration of business folks, who wait weeks and months for changes or new content after having experienced the development of ad-hoc content in minutes or hours.

Second, there's the typical need for directional accuracy as opposed to pennies accounting. For many computer scientists (and accountants), accuracy is an absolute term. In business, however, things are usually not as clear cut, and some information is better than none. So, obsessing over each data anomaly and assuring there are no mistakes whatsoever is not only impossible when it comes to large amounts of data generated by complex systems (and complex humans), it can lead to stagnation in the process, as "every penny" is reconciled and focus is placed on fractional amounts of information, while the millions or billions of correct results are ignored and left to wait for the next "release date" when all bugs can be fixed.

Finally, with large systems and large amounts of data, it is virtually impossible (or at least very, very expensive) to reproduce the entire production data in any other environment. There is simply too much of it to have copies. This fact, intensified by the need for speed and the ability to "let go" of the penny-accounting approach to the data, also supports the need to "skip" the migration cycle for BI content development.

Moreover, other than in very rare cases, BI content is based on reading data only. Even interactive BI content (such as dashboards or interactive reporting) only reads data. Most BI vendors make it almost impossible to change data while using their tools, so the risk of causing any kind of "damage" to systems by writing changes to the database is not a problem either.

There are cases where BI content development can benefit from following the typical software migration practice, but in most cases it suffers. If internal technology departments want to keep up with their internal demand, they need to adapt to a more rapid approach to BI content development.

Posted in BI At Large

Embedding Tableau Dashboards

I delivered a webcast about embedding Tableau dashboards in external-facing apps, and embedded BI in general. You can view the webcast recording at the following address, or check out the slides that accompanied the webcast here.

Posted in BI At Large, Data visualization, Tableau

Embedded Tableau Oscars Dashboard


A few months ago I published my Oscars dashboard ( Recently, I decided to produce a new version of it, leveraging Tableau and extending it with some additional features made possible by HTML5 integration. Embedding Tableau in an external web application framework is a great way to combine Tableau's terrific data exploration features, like drilling, grouping and filtering, with an intuitive, simple-to-use interface suitable for a user portal or an executive audience. This example leverages my Oscars database file and allows exploration of Oscar-nominated actors, actresses and directors, as well as a free-form exploration option for perusing more of the data set. Click the image above to interact with the full version. Enjoy..
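For reference, the basic embedding pattern with the Tableau JS API looks something like the sketch below; the server address, workbook and view names, and container id are placeholders, not the actual dashboard's:

```javascript
// Pure helper: compose the view URL handed to the Viz constructor.
function buildVizUrl(server, workbook, view) {
  // strip a trailing slash so the path joins cleanly
  return server.replace(/\/$/, "") + "/views/" + workbook + "/" + view;
}

// Browser-only wiring: instantiate the viz inside a placeholder div,
// hiding the native tabs and toolbar so the host page's own UI takes over.
function embedOscarsViz() {
  var container = document.getElementById("vizContainer");
  var url = buildVizUrl("http://my.tableau.server", "Oscars", "Main");
  new tableau.Viz(container, url, {
    hideTabs: true,
    hideToolbar: true,
    width: "900px",
    height: "700px"
  });
}
```

The host page can then drive the embedded viz (apply filters, switch sheets) through the returned Viz object instead of Tableau's own chrome.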

Posted in Data visualization, HTML5, Tableau

Tableau connected Medicare enrollment dashboard


Our Medicare enrollment database continues to grow and now contains over 9M enrollment records from across the country. I began collecting this information almost two years ago with my colleague Josh Tapley, and we used it to produce our Medicare Advantage dashboards using the SAP Dashboards (Xcelsius) tool, as well as our HTML5 reporting solution. Aside from being an interesting dataset, relevant to medical insurance professionals and anyone else interested in Medicare and healthcare, this platform gives us a medium to demonstrate many technical challenges and techniques we often solve on projects.

So, to add to our arsenal of Medicare Advantage dashboards, I have now added a Tableau version. This version looks and operates just like its siblings from SAP and our custom HTML solution, but uses completely different technology under the covers. To create it, we had to overcome several interesting challenges: serving up Tableau content to the internet via secure proxy from our secure server behind the firewall, addressing proxying, authentication and security challenges, and creating visuals that do not exist natively in the tool, such as a donut chart.

This dashboard is connected to the live data and executes a query each time a state is selected. This design pattern is consistent across all three versions of the dashboard and demonstrates the ability to work with these dashboarding tools in a completely "hands free", no-hassle, no-maintenance mode, where data is refreshed in the database and automatically reflected in the dashboard with no need for any intervention. Enjoy.

Posted in Data visualization, HTML5, Tableau

Using Webi 4.1 SDK to get Report SQL


When SAP introduced BusinessObjects 4.0, clients who were invested in the Webi REBEAN SDK got worried. The Webi SDK had been mostly deprecated, and it was not clear what the direction would be for those seeking programmatic ways to interact with Webi reports. Well, in 4.1 the Webi SDK has made a big comeback, and with a modern, web-friendly architecture: Webi reports can now be modified, explored and edited programmatically via RESTful calls. In a prior post, I wrote about the ability to log on to the BO enterprise platform via REST web services; building on that example, I took the 4.1 SP2 SDK for a spin. The use case I wanted to address was extracting the SQL query from a Webi report. This is a use case I run into often, especially on dashboard projects where database developers who have no idea about (or interest in) Webi want to get their hands on the SQL queries being used to generate dashboard screens (via BI services). To accommodate this, I was looking for a way to expose the query from the Webi data provider programmatically, via a simple hyperlink in the dashboard.

In this example, the service is implemented as a .jsp file on the BO Tomcat server. As explained in the first post, we have to work around the domain-of-origin issue, and a Java proxy seemed the way to go here, though other approaches are feasible.

The actual code to retrieve the SQL query of the Webi report is below:

<%@include file="getlogin.jsp" %>

The getlogin.jsp file basically uses the logon code from the first post and saves the generated logon token to the session. That way, the logon token can be used for subsequent operations easily. The rest of the page then calls the RESTful SDK and parses the response:

<%@page contentType="text/json"%>
<%@page trimDirectiveWhitespaces="true" %>
<%@page import="java.io.BufferedReader,java.io.InputStreamReader,
                org.apache.http.Header,org.apache.http.HttpEntity,org.apache.http.HttpResponse,org.apache.http.HttpStatus,
                org.apache.http.client.HttpClient,org.apache.http.client.methods.HttpGet,org.apache.http.impl.client.DefaultHttpClient,
                org.json.simple.JSONObject,org.json.simple.JSONValue" %>
<%
String reportId = request.getParameter("id"); // allow passing a report id as a query string from the dashboard
if (reportId == null) reportId = "7749"; // catchall in case no id is passed
StringBuffer sbf = new StringBuffer();
// in this example, there is one data provider for the report; if there are multiple,
// you will need to explore them and find their names in order to query each one separately
HttpGet httpRequest = new HttpGet("http://localhost:6405/biprws/raylight/v1/documents/" + reportId + "/dataproviders/DP0");
String logonToken = (String) session.getAttribute("logonToken");
httpRequest.setHeader("X-SAP-LogonToken", logonToken);
HttpClient httpclient = new DefaultHttpClient();
HttpResponse httpResponse = httpclient.execute(httpRequest);
if (httpResponse.getStatusLine().getStatusCode() == HttpStatus.SC_OK && httpResponse.getEntity() != null) {
    HttpEntity ent = httpResponse.getEntity();
    BufferedReader in = new BufferedReader(new InputStreamReader(ent.getContent()));
    String inputLine;
    while ((inputLine = in.readLine()) != null) sbf.append(inputLine);
} else {
    out.println("error: " + httpResponse.getStatusLine().getStatusCode());
    Header[] headers = httpResponse.getAllHeaders();
    for (Header header : headers) {
        out.println("Key : " + header.getName() + " ,Value : " + header.getValue());
    }
}
String jsonStr = sbf.toString();
Object obj = JSONValue.parse(jsonStr);
JSONObject array = (JSONObject) obj;
JSONObject obj2 = (JSONObject) array.get("dataprovider");
out.println(obj2.get("query")); // the data provider's generated SQL
%>

Posted in BI At Large, BusinessObjects 4.0, Web Intelligence