The last post here was about leaving Mozilla, and I mentioned a tiny bit about where I was going and what I’ll be working on. Then there was radio silence for a few months. So, what have I been up to?
First off, we launched WebGL 1.0 at GDC 2011. I was serving as chair at the time, though after the launch Ken Russell at Google took over as I was leaving Mozilla. I haven’t been as involved in WebGL (or Mozilla at all, really) over the past few months as I’d like to be, between starting the new job and moving. I hope to get back into it as things settle down, though I’ve been saying that for the past month. Soon!
I spent the first two months in Australia, at the headquarters of DownUnder Geosolutions in happening exotic vibrant sunny Perth. There was a lot of crash-course geophysics instruction going on. I basically inhaled 3-D Seismic Interpretation, and asked lots of dumb questions. I can now reasonably hold a conversation about 2-D, 3-D, and 4-D data sets; TWT vs. TVDSS (not to be confused with TWSS); stacking; horizons; faults; wells and well logs; etc. (At least, I can fake knowing what I’m talking about, which is often all that matters.)
The app itself, DUG Insight, is really all about data visualization, and UI to get at facets of the data in a reasonable way. We’re nowhere near where we want to be with the UI, but in many ways we’re already light years ahead of the competition — which often looks something like this.
One of the first things I worked on was adding some 3D visualization to well log data. This is basically data gathered along a well bore by instruments that are either part of the drilling package or are otherwise sent down. It’s often the only “truth” data you have that tells you exactly what’s down there. When showing this data in 3D, it’s nice to be able to vary the thickness of the cylinder based on the data, giving another visual cue alongside the applied colourbar. The result:
There are three wells visible here along with their logs, and some seismic data that I made translucent for the screenshot. The tricky thing here is that the data is quite high frequency, and is often very finely sampled. Very small spikes or troughs can be significant, but normal data minification often just smooths them away. We don’t have this solved in the 3D view yet, but in another view I did some work to attempt to show at least the local maxima that contribute to each pixel:
The right is a zoomed-in version of the data on the left. On the right, we can represent all the data accurately, since we have more pixels available than actual samples. On the left, though, more than one data sample contributes to each vertical pixel; the gray bars indicate the maximum value of all the data points that contribute to each pixel. The difference can be pretty big; here’s a side-by-side render of the same data, one with the gray max values and one without:
There are other options here, and we’re still working on figuring out how to expose them to the user (for example, drawing a line down the average value and drawing a bar from min to max).
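The per-pixel min/max idea can be sketched roughly like this (the class and method names here are made up for illustration — this isn’t Insight’s actual code): bin the dense samples into output pixels, and record the extremes of each bin so that significant spikes survive the downsampling.

```java
// Hypothetical sketch: per-pixel min/max decimation of a dense log curve.
// LogDecimator and decimate() are invented names, not the DUG Insight API.
public final class LogDecimator {
    /**
     * Returns {mins, maxs}: the minimum and maximum of the samples that
     * fall into each of the `pixels` output bins (simple proportional binning).
     */
    public static float[][] decimate(float[] samples, int pixels) {
        float[] mins = new float[pixels];
        float[] maxs = new float[pixels];
        for (int p = 0; p < pixels; p++) {
            // Range of samples contributing to this pixel.
            int start = (int) ((long) p * samples.length / pixels);
            int end = (int) ((long) (p + 1) * samples.length / pixels);
            float lo = Float.POSITIVE_INFINITY;
            float hi = Float.NEGATIVE_INFINITY;
            for (int i = start; i < end; i++) {
                lo = Math.min(lo, samples[i]);
                hi = Math.max(hi, samples[i]);
            }
            mins[p] = lo;
            maxs[p] = hi;
        }
        return new float[][] { mins, maxs };
    }
}
```

The renderer can then draw a bar from min to max (or just mark the max, as in the screenshots above) instead of plotting a single smoothed value per pixel.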
Other stuff I’ve worked on so far has been much less visual. The codebase is somewhat old, and was often written for correctness and/or purity, and less so for performance and ease of use. For example, we interpolate lots of data; our current interpolators tend to go through a number of function calls *per sample* and do no caching even though we’ll often interpolate multiple samples between the same two adjacent data lines. This is on the list of things to correct, because, well, babytown frolics. There will also be a lot more 3D visualization work done as soon as our next release ships.
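As a rough illustration of the caching fix (again, invented names — not Insight’s actual interpolator): a linear interpolator that remembers its last bracketing interval avoids redoing the bracket search for every one of the many consecutive samples that fall between the same two data lines.

```java
// Hypothetical sketch of a caching linear interpolator. Assumes xs is
// strictly increasing with at least two knots.
public final class CachedLinearInterpolator {
    private final double[] xs, ys; // knot positions and values
    private int lastLo = 0;        // cached lower bracket index

    public CachedLinearInterpolator(double[] xs, double[] ys) {
        this.xs = xs;
        this.ys = ys;
    }

    public double valueAt(double x) {
        // Fast path: reuse the cached interval when x still falls inside it,
        // which is the common case when sampling monotonically along a trace.
        if (!(xs[lastLo] <= x && x <= xs[lastLo + 1])) {
            int i = java.util.Arrays.binarySearch(xs, x);
            // binarySearch returns -(insertionPoint) - 1 for missing keys.
            lastLo = (i >= 0) ? Math.min(i, xs.length - 2)
                              : Math.max(0, -i - 2);
        }
        double t = (x - xs[lastLo]) / (xs[lastLo + 1] - xs[lastLo]);
        return ys[lastLo] + t * (ys[lastLo + 1] - ys[lastLo]);
    }
}
```

The win isn’t the linear math, which is trivial either way; it’s turning a per-sample search (and, in our case, a chain of function calls) into a couple of array compares for the typical access pattern.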
In other news, I also drove cross-country from the San Francisco Bay Area to Toronto, where I’m now living. It should have been a lot more fun, but the various camping/hiking I had planned along the way got cancelled because of bad weather… so I just drove straight through, taking about 4.5 days. The Cross-country Road Trip playlist on rdio that friends helped me put together was pretty awesome; I listened to it straight through, and probably ended the drive with a few hours of it left. There’s some great music there, along with a few questionable choices that made me laugh during the trip (I got rickrolled driving into Colorado; the Oregon State Song came up at some point; etc.).
Last but not least, DUG is hiring in Toronto — if you have some data viz, UI, or rockstar Java chops, send me an email (vladimir at pobox dot com). We have just about every interesting software engineering problem, so there’s a lot of good challenges to tackle. The Toronto team is currently small (just three of us), in a great brick-and-beam building near Spadina and Richmond; it’s a pretty fun environment.