Comments on: Book review: Interactive Data Visualization for the Web by Scott Murray
https://blog.scraperwiki.com/2013/05/book-review-interactive-data-visualization-for-the-web-by-scott-murray/

By: Ian Hopkinson https://blog.scraperwiki.com/2013/05/book-review-interactive-data-visualization-for-the-web-by-scott-murray/#comment-902 Sun, 26 May 2013 10:56:04 +0000

Your comment is pretty much a blog post of its own!

I see the power of R and Matlab as being that they offer a large range of pre-prepared visualisations which take little effort to apply to your data. Ultimately, though, they are not great at the precise placement of labels and so forth; the serious visualisation people I know all take output from such programs and tidy it up in Adobe Illustrator or Inkscape.

Where R and Matlab fall down is that they don’t support the “overview first, zoom and filter, then details on demand” visualisation methodology. I see this as the gap that d3 can fill; others have used Processing for it. Tableau is an attempt to make this a user-friendly experience, but it sacrifices the flexibility of a programming environment.

It appears to be possible to render a d3-generated SVG visualisation to a bitmap server-side, so it should be possible to add this to the platform.
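A minimal sketch of what that server-side step might look like, assuming Node.js and rsvg-convert (from librsvg) are available on the platform; the file names are placeholders, and Inkscape or ImageMagick could stand in for the converter:

```javascript
// Minimal sketch, assuming Node.js and rsvg-convert (librsvg) on the server.
// The SVG produced by d3 is assumed to have been saved to chart.svg.
var execFile = require("child_process").execFile;

function svgToPng(svgPath, pngPath, callback) {
  // Shell out to rsvg-convert; other SVG->PNG converters would also work.
  execFile("rsvg-convert", ["-o", pngPath, svgPath], callback);
}

svgToPng("chart.svg", "chart.png", function (err) {
  if (err) throw err;
  console.log("wrote chart.png");
});
```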

By: Julian Todd https://blog.scraperwiki.com/2013/05/book-review-interactive-data-visualization-for-the-web-by-scott-murray/#comment-901 Fri, 24 May 2013 17:26:38 +0000

You’re right to suggest that the best visualization tool is not much good if you can’t program it. And programming is as much a process of exploring the data as it is writing the code to produce a particular visualization that, ideally, you haven’t seen before except in your imagination. You just hope it’s going to look good, and if it doesn’t, you explore and reprogram around the visualization until it does.

If your exploratory tool, which you have interacted with semi-visually, is able to encode its state (say, in the query string), then you have effectively programmed this visualization.
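As an illustration of what that state-encoding could look like in the browser (the year and metric parameters are hypothetical examples, and URLSearchParams is the modern API for this):

```javascript
// Minimal sketch: record the chart's interactively chosen state in the
// query string, so the URL itself is the "program" for the visualization.
// The year/metric parameters are hypothetical examples.
function saveState(state) {
  var params = new URLSearchParams(location.search);
  Object.keys(state).forEach(function (key) {
    params.set(key, state[key]);
  });
  history.replaceState(null, "", "?" + params.toString());
}

function loadState() {
  var params = new URLSearchParams(location.search);
  return { year: params.get("year"), metric: params.get("metric") };
}
```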

This is an easier and more exploratory way to program a visualization than using R or MatLab. With those tools, you iterate the coding and drawing multiple times until it starts to produce an answer that looks right. That is a clunky means of interacting with and exploring the data visualization, but it is what it is. The popularity of something like R comes from the fact that it makes good guesses at the visualization, and so reduces the number of reprogramming iterations it takes. But there is still a trade-off.

For example, your visualization in R might be very smart about placing the label on the Y-axis, while the interactive visualization in Javascript may not be so smart. However, if the Javascript visualization lets you drag the label to where you want it, then it’s going to take about the same amount of time to get it right. And it will be more flexible about other positionings, such as where to put the key for the graph, which usually goes in the plotting area where it doesn’t overlay an important part of the curves (i.e. this is a matter of taste).
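A minimal sketch of such a draggable label, using the d3 drag behaviour of the era (d3.behavior.drag in d3 v3); the .y-label selector is an assumption about the markup:

```javascript
// Minimal sketch, assuming d3 v3 and an existing <text class="y-label">
// element inside the SVG. Dragging repositions the axis label by hand.
var drag = d3.behavior.drag().on("drag", function () {
  // Move the label to follow the pointer during the drag.
  d3.select(this)
    .attr("x", d3.event.x)
    .attr("y", d3.event.y);
});

d3.select(".y-label").call(drag);
```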

Now suppose you had a tool that could automatically render your Javascript visualization into a PNG bitmap, comparable to the static output of R or Matlab. Maybe do this through an SVG->PNG converter.
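The first half of that pipeline, getting the SVG markup out of the page so a converter can work on it, is straightforward; a sketch assuming a single <svg> on the page, with a hypothetical server endpoint:

```javascript
// Minimal sketch: serialize the live d3-built SVG to markup that can be
// sent to a server (or saved to disk) for SVG->PNG conversion.
var svgNode = document.querySelector("svg");
var markup = new XMLSerializer().serializeToString(svgNode);

// Hypothetical endpoint; the server would run the converter and hand
// back a PNG for inclusion in a static report.
// fetch("/render-png", { method: "POST", body: markup });
```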

Then, couldn’t you say that creating visualizations the old-fashioned, hacky way, by coding and rerunning to generate a static image, is entirely redundant?

That is, provided the platform can produce static bitmaps to be included in static reports. This is the bridge from one paradigm to the other.
