If you’re not failing, you’re not trying

Posted on Nov 20, 2013 in Computational Media

… or it could be your time management and unrealistic expectations.

A 50% final (between schematic sketching submissions?)

I have fallen prey to the latter two, which is, alas, to be expected. So, while I managed to get interactions, averages, and dashboards going for my student this week, I’ve barely wrapped my head around a more efficient method of Sankey diagram construction.
[Image: flows-01532]

Right now, I’ve managed to create the array of individual nodes, culling and calculating them from the initial csv table. (Just a bit of mouse play above.) It’s a far cry from my revised expectations of getting the lines into an interactive form and/or doing any design work. Current code is here. And the massive condensation failure is here.
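Since the actual files live at those links, here’s a minimal sketch of the culling step for anyone reading along. Every name in it (FlowNode, flows.csv, the source/target/amount columns) is a placeholder, not what’s in my real code:

```processing
// Minimal sketch: cull unique nodes out of a 3-column csv.
// FlowNode, "flows.csv", and the column names are placeholders.
ArrayList<FlowNode> nodes = new ArrayList<FlowNode>();

void setup() {
  Table flows = loadTable("flows.csv", "header");
  for (TableRow row : flows.rows()) {
    addNode(row.getString("source"), row.getFloat("amount"));
    addNode(row.getString("target"), row.getFloat("amount"));
  }
}

// Only create a node the first time its label appears;
// otherwise accumulate the flow into the existing node's total.
void addNode(String label, float amount) {
  for (FlowNode n : nodes) {
    if (n.label.equals(label)) {
      n.total += amount;
      return;
    }
  }
  nodes.add(new FlowNode(label, amount));
}

class FlowNode {
  String label;
  float total;  // cumulative flow, later mapped to bar height
  FlowNode(String label, float amount) {
    this.label = label;
    this.total = amount;
  }
}
```

The linear scan in addNode() is obviously naive for big tables, but for a few dozen nodes it keeps the class count down.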

The real sticking point this week has been: a) a day spent verifying the historical population statistics, exposure levels, and spatial distribution, and adding another iteration in time, and b) trying, with no luck whatsoever, to condense the class ‘NodePPeople’ (or people placement for short) and its iterations through the collected array of nodes from the Node class, for examining connections and determining spatial placement. While I have every intention of finishing out the sketch, I do think that I need to revise or rethink how some of the underlying classes and table-sorting maneuvers are nested.
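For my own notes, the condensation I keep failing at would amount to something like a single placement pass over the node array, rather than NodePPeople’s nested iterations. This assumes the placeholder FlowNode from the sketch above grows hypothetical column, y, and h fields; it’s the intent, not working project code:

```processing
// Hypothetical condensation of the placement pass: one loop per column,
// stacking nodes by their cumulative totals. Assumes FlowNode (above)
// also carries int column and float y, h fields.
void placeNodes(ArrayList<FlowNode> nodes, int columnCount, float plotH) {
  for (int c = 0; c < columnCount; c++) {
    // total flow in this column, so bar heights share the plot height
    float columnSum = 0;
    for (FlowNode n : nodes) {
      if (n.column == c) columnSum += n.total;
    }
    // stack the column's nodes top to bottom, proportional to flow
    float y = 0;
    for (FlowNode n : nodes) {
      if (n.column != c) continue;
      n.y = y;
      n.h = plotH * (n.total / columnSum);
      y += n.h;
    }
  }
}
```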

So the remainder of this post has two tasks:

  • First, I want to detail, very quickly, why I’m interested in the Sankey format.
  • Second, I want to lay out, for myself and whoever may read this, both the conceptual and tactical approaches I’ve taken in the last week trying to crack my code-blocks. In part this is to help with next steps, but I’d also like to think through why something like a Sankey diagram should be either easy or hard given the semester’s tools.

On a semi-positive note, it now looks like a very orderly, but dull, stacked bar graph in contrast with last week’s iteration:

[Image: non-arranged]

Of course, what is most startling is the difference in data:

Calculation failures:

[Image: bad-1]

[why, why won’t you iterate through all the options?]

[Image: decent]

Finally, with pedestrian cobbling and conditionals: brand-spanking-new tables of parameters. Only 50% a fail, because really, what’s the point of building a unique application if you can’t send any other 3-column csv file through this table?
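For what it’s worth, the ‘any other 3-column csv’ ambition boils down to something like this: read the three columns by position and derive a fresh parameter table. The column meanings (source, target, amount) are my assumption:

```processing
// Sketch of the 'any 3-column csv' idea: read whatever three columns
// come in by position, and write out a derived parameter table.
Table buildParams(String filename) {
  Table in = loadTable(filename, "header");
  Table params = new Table();
  params.addColumn("source");
  params.addColumn("target");
  params.addColumn("share");  // each row's fraction of the total flow
  float total = 0;
  for (TableRow row : in.rows()) total += row.getFloat(2);
  for (TableRow row : in.rows()) {
    TableRow out = params.addRow();
    out.setString("source", row.getString(0));
    out.setString("target", row.getString(1));
    out.setFloat("share", row.getFloat(2) / total);
  }
  return params;
}

// Usage: saveTable(buildParams("data/flows.csv"), "data/params.csv");
```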


1) Why flow diagrams, Sankey structures, or flow maps?


Maybe it’s the infrastructural interests or the fascination with ‘quantitative revolution’-era models of planning, ecology, and whatnot, but I’ve been trying to figure out (especially when debugging) why I chose to work on data sorting and cumulative, calculation-heavy diagrams when I should, honestly, be grabbing vector or autonomous-agent formulas from The Nature of Code. Part of it, clearly, is that the more unified a system I attempt to model, the more historical statistics I have to reverse-engineer. Perhaps it’s utterly beside the point that I now know that, according to Herman Kahn-esque calculations, nearly 80% of Ohio was more or less anticipated to have radiation sickness within a month, or even that I know exposure thresholds. Or that, in gathering field and fallout data, I’ve got a good handle on dressed animal weights, processing waste, and feed-efficiency ratios. It’d be lovely to get more of that out, into the computer, in the next few days (once I rework the structure). I’m not a coding whiz and, clearly, I give myself far too much latitude in terms of anti-aesthetic appearances compared to what I’d be doing for exhibit or landscape work. I think I choose complex structures because, well, I’m still interested in computation as a messy cultural artifact as much as a discrete, self-reflexive instrument. Clearly there’s a space between those poles to occupy…

2) DeBugs?

[Attached: files]

I’ll spare you reading a ton of code (a simplified working version and my original intent (as fail!) are attached above). Somehow, I get entirely sucked into working, reworking, tweaking, moving around brackets, and just generally letting days slip by while trying to confirm correct table additions. There’s something about the really odd uphill battle of a math-heavy program: something without a noticeable payoff for the first 80% of the code. Maybe it’s the infographics, or maybe it’s reading all those succinct bits of Shiffman/Reas code, but I have the impression that my compilation efforts are incredibly inefficient. Not that I seem to be producing much.

This is to be partially expected, but I’d actually structured this little chunk of research to be a bit more streamlined. I found the d3.js sankey plugin, read 4 or 5 different code and pseudo-code outlines of the diagram structure, and attempted to build atop that a more or less decently researched notion of the required components. I even did a weekend of JavaScript coding to force myself to actually read the plug-in, but I think, honestly, I should’ve taken far fewer leaps from that initial structure. I’m fine with the traditional arts process of re-making to break and explore, but I should’ve stuck with a very tightly constrained translation exercise instead of moving quickly into my own pseudo-code.

In addition, perhaps the problem is a bit of binge coding. From dipping into Fry in September to November’s finals and course prep, I cram a couple of days a week trying to build something, but rarely in tandem with taking advantage of the larger social infrastructure of ITP to discuss/explore/debug. Either way, I tend to work as though huge time-blocks and complicated structures are necessary, whereas sometimes, like this last week (in sheer frustration), I’m forced to do a very, very simple build. Even now, I’ve just accepted how the structure unfolds. Once something odd, like the split-array skeleton, is in place, it’s easier, if still messy, to append another table here or there to grab variables. My next step, even as I know I’m going to do a full tear-down in the long run, is to take advantage of where the interactive text labels are defined to write an additional series of object tables that forms the links (roughly sketched after the list below). It wasn’t the plan, but it makes perfect sense given that they’ve inherited layered spatial parameters. So, in general order:

1) finish out the links and test their interactive potential for the current build: no extra data, no gratuitous graphics, just the simple structure.

2) reorganization: if this is ever going to work for data in general, as opposed to known tables, I have to move away from some of the known ‘arrays’ that were inserted out of frustration. I will be going back to try to automate and nest things for future adaptability. My sense is that if I work with a few simple boolean choices, e.g. alternate loading tables, I’ll be forced to clean up and adopt a better structural strategy. I will definitely be sitting down with both my code (1, above) and the JavaScript to plan those particular structures.

3) data interaction: once the prototype is actually interactive, it then makes sense to feed it a few more historic data sets and/or contemporary material-flow samples. Clearly exercises 1 and 2 have priority, as well as the potential to be completed in the next two weeks, so my ultimate ends are rather indifferent to ITP scheduling.
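And, as a note to self on step 1, the link objects I have in mind would read the spatial parameters the labels already inherited, something like the sketch below. FlowLink, and the x/w fields it assumes on the nodes, are placeholders continuing the earlier sketches, not the classes in the linked code:

```processing
// Rough sketch of step 1: links drawn straight off the nodes' spatial
// parameters. Assumes FlowNode also carries x and w (bar position/width).
class FlowLink {
  FlowNode from, to;
  float amount;

  FlowLink(FlowNode from, FlowNode to, float amount) {
    this.from = from;
    this.to = to;
    this.amount = amount;
  }

  // Draw the link as a flat quad between the two node bars;
  // no curves, no per-link offsets yet.
  void display() {
    noStroke();
    fill(120, 80);  // translucent gray
    beginShape();
    vertex(from.x + from.w, from.y);
    vertex(to.x, to.y);
    vertex(to.x, to.y + to.h);
    vertex(from.x + from.w, from.y + from.h);
    endShape(CLOSE);
  }

  // Cheap interactivity test: mouse inside the quad's bounding box.
  boolean hover() {
    return mouseX > from.x + from.w && mouseX < to.x &&
           mouseY > min(from.y, to.y) &&
           mouseY < max(from.y + from.h, to.y + to.h);
  }
}
```

The straight-edged quad and the bounding-box hover test are deliberately crude; curves and proper hit-testing can wait until the structure survives the tear-down.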