Getting adventurous…

Up until now I’ve been trying to just get content into the HoloLens environment: producing 3D models in various guises and importing them into Unity, or throwing them up onto a server for the HoloLens to fetch.

The next stage is telling the world (think big?!) what we’re doing, and that means displaying what is essentially a one-person projection to a whole room of people. Conveniently, the HoloLens lets you live-stream what someone is seeing through the device over a WiFi connection. The tiny webcam-like camera sits on the HoloLens and combines its image with the holograms on display. It does a nice enough job for a quick demo, but it’s pretty low-resolution and grainy, so we should think about making something a little more polished.

About a year or so ago, a group of HoloLens developers came up with a way of solving this issue, upgrading the live-stream optics by jury-rigging a HoloLens on top of a DSLR camera. I had a look through the tutorials back then and chalked it off as something fun to do, but well above my competence level at the time. However, these last few months I’ve been chipping away at getting things set up and today… success!

Firstly I took stock of what we had and what the major expenditures would be. Thankfully the lab had bought a nice new DSLR (Nikon 5300) that we use for field work, so no spending needed there. The lab desktop was lacking an HDMI-in, so we had to splash out on a capture card; we used the one listed on the GitHub site, which had somewhat mixed reviews, but in hindsight I can say I’ve had no issues getting it up and running. The software SDKs needed were free downloads (woo!), so the final conundrum was how to attach the HoloLens to the DSLR. The tutorial mentions an elaborate setup involving both 3D-printed and machined parts, which I’m sure makes for a very secure housing for the expensive HoloLens. Seemed a little over-elaborate for me… to the interwebs! I stumbled across this 3D-printable model that I ended up sending to the library at Columbia:

Turns out most universities have a 3D printer lying around and they’ll print things like this for free!

This did the job nicely.

So, all the hardware is sorted: the capture card works well and live-streams through the HDMI-out from the camera to the desktop. Now we need to calibrate things, as the software has to composite images from two cameras (the DSLR and the HoloLens) that are in different locations. Calibration time!

After taking about 50 photos like this, you’d look bored too…

This process spits out a text file that provides the transform information needed to combine and overlay the images so the holograms appear in the right positions.
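If you’re curious what that transform actually does, here’s a minimal sketch of the idea in Python. Every number below (the camera offset, focal length, principal point) is made up purely for illustration, and the real calibration file has its own layout; the point is just that a rigid transform maps hologram coordinates into the DSLR’s frame, and a pinhole projection maps them onto its image.

```python
# Toy version of compositing: take a 3D hologram point in the HoloLens
# camera frame, move it into the DSLR's frame, and project it to pixels.
# All values are hypothetical, not from the actual calibration output.
EXTRINSIC = [
    [1.0, 0.0, 0.0, 0.10],   # pretend the DSLR sits ~10 cm to the right...
    [0.0, 1.0, 0.0, -0.05],  # ...and ~5 cm below the HoloLens camera
    [0.0, 0.0, 1.0, 0.00],
    [0.0, 0.0, 0.0, 1.0],
]
FX = FY = 2400.0             # DSLR focal length in pixels (made up)
CX, CY = 960.0, 540.0        # principal point for a 1920x1080 frame

def project(point_hololens):
    """Map a 3D point (HoloLens frame, meters) to DSLR pixel coordinates."""
    x, y, z = point_hololens
    # Rigid transform into the DSLR frame (rotation is identity in this toy).
    p = [sum(EXTRINSIC[r][c] * v for c, v in enumerate((x, y, z, 1.0)))
         for r in range(3)]
    # Pinhole projection onto the DSLR image plane.
    u = FX * p[0] / p[2] + CX
    v = FY * p[1] / p[2] + CY
    return u, v
```

With that transform, a hologram rendered 2 m in front of the HoloLens lands at the right spot in the DSLR frame, which is all the compositor needs.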

The final step is incorporating this within an application. I managed to work out how to share holograms a while back, which is a requirement for this approach (although there are now multiple ways to share holographic environments between HoloLenses). I’m going to use my trusty Virtual Earth application, which has a hi-res image of the Earth overlain on a sphere and can be shared among a group for class use. Miraculously, on firing everything up for the first time (well, not quite the first time, as I didn’t read the instructions properly and booted things up in the wrong order…!) it worked! So many more pixels on my holographic image captures:

Before: using just the HoloLens’s ‘webcam’.
After: the DSLR composite from exactly the same position. The hologram is way more solid-looking and much higher resolution – plus the background is much clearer.
The outcrops look pretty good too: here’s one created from drone imagery

And it even works for movies too:

1080p’s of relief. It only took a few months of getting all these things put together and having the confidence to try it. The next job will be getting the holograms to sync up so they’re much more dynamic in these videos!



1 year in…

The last couple of months have been a little busy. Between getting back to some research on the Whillans Ice Stream in Antarctica and continuing the development of the Fossett Lab’s applications, it’s been a bit hectic. Throw in a SeismoDome planetarium show (which was a hit, and we may have some shows outside of NYC) and there hasn’t been all that much time to write any blogposts!

SeismoDome at the Hayden Planetarium

The annual AGU meeting in New Orleans is upon us and preparation is well underway for presenting a demonstration of the Fossett Lab’s work to the scientific community. Given the expected unreliability of the WiFi at AGU, I’ve been putting together a standalone demo app that should show off a variety of different topics we’ve been working on: outcrops, atomic structures, and earthquake catalogs. We’re hoping that the community will be generous enough to help us grow the content, as the HoloLens-side application is pretty much ready for it.

Also in prep for AGU, my GitHub site has gotten some much-needed attention and I’m starting to upload a few of the applications. It’s still pretty rough but the essentials will be up there. I’ve also bought business cards for the first time ever…

This last year has been a lot of learning for me. Designing user experiences (UX) was not really part of my Ph.D., but it has been a lot of fun. It’s also been mostly solo work and a lot of self-teaching, so it’s a relief to introduce a couple of undergraduates into the lab in the new year: one working on importing the Rudolph rock collection into a holographic environment, and one working more closely with me on the design of Earth Science applications (and hopefully fixing a lot of my poorly written code!). Bring on 2018!

Sound and structure

Earthquakes make a noise… they’re just so low-frequency that we can’t hear them. Our ears are excellent at picking out strange things or patterns within what would look like noise if you displayed the waveforms visually. So can we use sound to uncover more information within our data than current analysis methods do? This is how the Seismic Sound Lab started.
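The core trick of sonification is tiny: keep the recorded samples but replay them at an audio rate, which scales every frequency up by the same factor. Here’s a toy Python sketch of the idea (my own illustration with made-up names and rates, not the Sound Lab’s actual code):

```python
import math
import struct
import wave

RECORD_RATE = 100        # samples/s of a (synthetic) seismogram
PLAYBACK_RATE = 44100    # samples/s when written out as audio

def synthetic_seismogram(freq_hz=1.0, seconds=60):
    """A stand-in for real data: a decaying 1 Hz sinusoid."""
    n = int(seconds * RECORD_RATE)
    return [math.exp(-5 * t / n) * math.sin(2 * math.pi * freq_hz * t / RECORD_RATE)
            for t in range(n)]

def audible_frequency(freq_hz):
    """What the listener hears: a 1 Hz wiggle becomes a 441 Hz tone."""
    return freq_hz * PLAYBACK_RATE / RECORD_RATE

def write_wav(samples, path):
    """Write the unchanged samples at the faster rate (16-bit mono WAV)."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(PLAYBACK_RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))
```

A minute of 100 samples/s data collapses to about 0.14 s of audio, which is why years of a catalog can be compressed into a short, listenable movie.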

Certainly sound adds a more immersive feel to any data visualization. This last month, in the run-up to Lamont’s Open House, I’ve been working on developing new visuals for the Sound Lab using Unity 3D. This game engine allows for much more dynamic visuals, and the ability to fly inside a subduction zone. It’s been a bit of a challenge to get the visuals right (they’re still not as good as I want them to be!), but we’ve put together a demo of 17 years of Chilean seismicity that begins to show the capabilities of using gaming software for data viz.

Catalog movie for 17 years of Chilean earthquakes. The surface of the Earth is see-through, the circle at the center is the Earth’s outer core.

The 3-dimensional aspect allows us to show off structures illuminated by earthquake locations. We’re also developing visuals for a number of other significant earthquakes: Parkfield, Sumatra, Haiti, and Tohoku. For the latter, we’re also developing 3D ground-motion visualizations and sounds utilizing the Transportable Array. The sounds are still in the works, but here’s a preview showing where the TA stations in the Midwest of the USA are located:

Obviously all the visuals will be developed for the planetarium show! The flythroughs of the subduction zones look really good in the dome; unfortunately I can’t really convey the full effect here on a computer screen, but here’s the general idea:

Lots to do still!


Protein (holo) synthesis

3D holographic models are all well and good, but where the technology may provide an edge is with models that are impossible to physically build (3D printing has come a long way, but it has yet to print out objects that can hover!). This became more apparent to me when I was asked whether we could visualize a protein molecule using the HoloLens.

Now, being a geophysicist by trade, I had no idea where to begin, or what to load up that would be useful! I’m still a little unsure about what will be most beneficial to researchers and educators, but I think the implementation of holographic models in this instance could really help.

There is software on the market that can represent proteins and other biological structures in 3D, and I was pointed towards PyMol as a good place to start. OK, so I could get a molecule visualized in 3D, but could it be rendered on the HoloLens, or would it be too much information?

First up, I needed to convert the file format to something Unity could understand. This wasn’t trivial: although PyMol can export the COLLADA .dae format, when reading these files into Unity the colors (which can be important) are not retained. Shifting to Plan B, I loaded a .wrl exported from PyMol into Blender. This seemed to work: all the colors were retained, and when I opened up the .blend file in Unity all the colors were present and correct. I loaded up an application on the HoloLens and took a look-see:

Nice, DNA strings and all! That was fine, but some things needed tweaking. Firstly, the spheres are probably not the best way to visualize the organic chains, and some of the elements are overlain by others, so I’ll change them to PyMol’s ‘stick’ representation. Secondly, and slightly more importantly, the model was incredibly juddery; the loss of a high frame rate is a symptom of trying to render too much information at once, and it makes for a lousy experience. Best to try and decimate the mesh a little. Lastly, it’s a static hologram: we can do better than that!

Back in Blender, I can check whether there are duplicate vertices that can be deleted. This is often the case when models are generated by scientific software that isn’t really optimized for this sort of thing. Sure enough, there was a whole bunch of duplicates, so removing them will help a lot with reducing the file size and speeding up the rendering. While I was in Blender I also sorted out the shader so that the surfaces appear smooth as opposed to faceted, as in the video above.
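For anyone wondering what deleting doubles actually does under the hood, here’s a plain-Python sketch of the idea (the function name and tolerance are my own; Blender’s internals are more sophisticated): vertices closer together than a tolerance get merged into one, and the faces are reindexed to point at the survivors.

```python
def remove_doubles(vertices, faces, tol=1e-6):
    """Merge vertices closer than `tol` and reindex the faces.
    A toy version of Blender's 'remove doubles' mesh cleanup."""
    seen = {}        # quantized coordinate -> new vertex index
    merged = []      # deduplicated vertex list
    remap = []       # old vertex index -> new vertex index
    for v in vertices:
        key = tuple(round(c / tol) for c in v)
        if key not in seen:
            seen[key] = len(merged)
            merged.append(v)
        remap.append(seen[key])
    new_faces = [tuple(remap[i] for i in f) for f in faces]
    return merged, new_faces
```

Two triangles that shared an edge but each carried their own copies of the shared vertices collapse down to one set, which is exactly where the file-size savings come from.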

Back in Unity I loaded up the improved model (much faster!) and designed a new manipulation toolbox to rotate, move, and resize the hologram so I can place it on my desk. Here’s the final version:

It’s no longer jerky, those smooth shaders are much nicer to look at, and the ‘sticks’ look good too! The next step will hopefully be adding a whole load more molecules, perhaps being able to visualize relative sizes and how they interact. As it turns out, when I export this prefab as an AssetBundle the file size is less than 1MB… which should make for a fast load time from an online server!


Behind the scenes: HoloLens Photogrammetry

A few years back, the Earth Science building at WashU was dedicated, and the wealthy donor whose name the building now bears generously gave the department a number of valuable rock samples. Some of them are precious and fragile enough that they are not on display in the cabinets in the entrance hallway, which seems a shame. However, it makes a good project for learning to do photogrammetry a little better!

In the lab we set up a light box where we can place the samples on a turntable, so we can rotate them without having to move the camera. I also put a couple of scale bars on the turntable to use as a reference for scaling the object correctly during the photogrammetry steps. After each photo I rotated the turntable about 10 degrees, so about 36 photos per lap, and then I did two more laps at different angles to get plenty of coverage. I would normally use a remote with the camera and spend a little time getting the camera levels correct, but I didn’t have a remote with me and just wanted to use this as an example anyway.
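The capture schedule above is simple enough to write down explicitly. A quick sketch (the elevation angles here are my guesses for illustration, not the ones we actually shot at):

```python
def capture_angles(shots_per_lap=36, elevations=(0, 25, 50)):
    """All (azimuth, elevation) pairs for the turntable passes:
    ~10 degrees between shots, three laps at different camera angles.
    Elevation values are placeholders, not the real setup."""
    step = 360 / shots_per_lap
    return [(round(i * step), elev)
            for elev in elevations
            for i in range(shots_per_lap)]
```

Three laps of 36 shots gives 108 photos, which squares with the "about 50 photos" feeling of boredom per lap or two.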

Screenshot of Photoscan – showing the point cloud, camera locations (blue squares), and marker locations along the scale bars.

Once the files are loaded into the photogrammetry software (Agisoft Photoscan), the first thing to do is to add markers to the photos at known places on the scale bars and set their locations (1cm intervals). Doing this on a bunch of photos allows the software to correctly scale the final model. The software then does a lot of the heavy lifting: aligning the cameras, calculating common points, and adding color to the vertices. After a couple more steps you end up with a textured 3D mesh of the rock sample.
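The scaling step boils down to one ratio: the known physical spacing of the markers divided by their distance in the reconstructed (arbitrary-unit) model space. A minimal sketch of the idea, with hypothetical function names of my own:

```python
import math

def scale_factor(marker_a, marker_b, true_distance):
    """Ratio of real-world distance to model-space distance between two
    markers whose physical separation is known (e.g. 1 cm on a scale bar)."""
    model_dist = math.dist(marker_a, marker_b)
    return true_distance / model_dist

def apply_scale(vertices, s):
    """Scale every model vertex into real-world units."""
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]
```

In practice Photoscan averages this over many marker pairs across many photos, which is why placing markers on "a bunch of photos" matters.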

The final textured model. The change in the level of detail is down to the camera being out of focus when covering that area… I also probably used too large an aperture!

Now that we have our 3D model of the rock sample, we could just leave it there, stuck on a 2D monitor in perpetuity… but we have a HoloLens! Exporting an FBX file and a 4K PNG texture to Unity is relatively straightforward. Unity’s coordinate system is different from Photoscan’s, so I needed to rotate the model and place it at the correct point in the scene. I also noticed that my scale was now off by a factor of 100, which probably has to do with the Photoscan markers’ units being in cm.
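The fix amounts to an axis swap plus a unit change. The exact axis convention depends on how the model was exported, but assuming a right-handed, Z-up model in centimeters going into Unity’s left-handed, Y-up, meter-based world, the conversion is a one-liner per vertex (a sketch, not the actual import code):

```python
CM_PER_M = 100.0

def photoscan_to_unity(p):
    """Convert a vertex from an assumed right-handed, Z-up, cm frame
    to Unity's left-handed, Y-up, meter frame. Swapping Y and Z flips
    the handedness, and dividing by 100 fixes the factor-of-100 scale."""
    x, y, z = p
    return (x / CM_PER_M, z / CM_PER_M, y / CM_PER_M)
```

A point 100 cm along X and 200 cm up in the model lands at (1 m, 2 m, 0 m) in Unity, which matches the 100x scale correction noted above.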

Scene set up in Unity

There’s also just been a release of some new scripts on Microsoft’s Mixed Reality GitHub, which I thought would be good to test out. They’ve added some handy manipulation tools, which I’ve added to this scene. I’m not sure whether they’ll be easily transferable to the apps I’ve been developing, as they seem a little buggy at the moment. This scene then gets deployed to the HoloLens so I can test how well Photoscan was able to recreate the sample:

It turned out pretty well, I think, given that it was just a rough set of photos and I didn’t use the highest accuracy settings in Photoscan. We could end up with a nice little holographic rock museum before too long!

Under the Dome

A slight change of format…

Since moving to New York, I’ve been working (in part) with Dr. Ben Holtzman on his Sounds of Seismology project. This also encompasses an ongoing series where we present these sounds and visualizations in the Hayden Planetarium at the American Museum of Natural History, in what we call “SeismoDome”! Here’s the video National Geographic did on it:

The bonus of the planetarium is that we get to use a huge, immersive dome to project the visuals, as well as a 23-speaker system to direct sound. During this lecture we take the audience through the Earth from earthquake to seismometer, explaining where earthquakes happen and how their data can illuminate the interior of our planet. It makes for a spectacular show; the only problem is how to display a planet on something that is supposed to be displaying the sky! This week we had our first test of some new visuals I’ve been putting together that will hopefully make better use of the dome’s attributes.

The Japan subduction zone defined by earthquake locations in the Hayden Planetarium. Blue colors are shallow events, then red through yellow as the slab descends into the mantle. It’s really hard to take good pictures…

The good news is that the way we’re making the videos will work! We do still need to adjust viewing angles and put the movies together with the surround sound, so there’s plenty to get on with.

A week in St. Louis

It’s always nice to come back to St. Louis… so much cheaper than NYC!

There were no demos this time while I was back, which meant just getting my head down to make sure that the redesigned Virtual Earth app (where all the models are hosted on a remote server as opposed to being loaded locally on the HoloLens) is working well. We’ve managed to get the shared environment working, where multiple people see the same hologram in the same place. This is really important when it comes to having students in here: an instructor can literally point to a hologram and it’s at the same point for all users.

The hosted models have been a revelation. The load time for the app has been drastically reduced and it also allows us to host so many more models. Eventually we could have thousands to choose from!

In the following video you’ll notice that you can also talk to the holograms, which allows simple measurements (reasonably accurate for now!) that may be useful for instructors in the future. I’m imagining holo-stratigraphic columns… hmm… we’ll see!
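Under the hood, a measurement like this is just the distance between two points the user has picked on the hologram, multiplied by the model’s scale factor so the answer comes out in real-world units. A tiny sketch of that logic (the function name and scale handling are my own illustration, not the app’s actual code):

```python
import math

def hologram_distance(p1, p2, model_scale=1.0):
    """Distance between two points picked on a hologram (in scene units),
    converted to real-world units via the model's scale factor.
    E.g. a model displayed at 1:1000 would pass model_scale=1000."""
    return math.dist(p1, p2) * model_scale
```

Accuracy then depends entirely on how precisely the user can place the two points and how well the model was scaled during photogrammetry, which is why the measurements are only "reasonably accurate for now".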


Site Redesign!

So I’ve decided to update my website (it was slightly old-school and some of the work was a little dated!) and thought that I’d amalgamate my own personal site with this WordPress site. I like having the blog pages front and center!

Before this, my blog was one I started for a field season in Antarctica at the end of 2016; sadly I never fully wrapped it up, as things got a little too hectic when I got back to the States.

After AGU in December I caught up with a few of my colleagues who had stayed on after I left. Sadly, no more stations were recovered by them, and they thought they would have to leave the final six seismometers in the field to be picked up next year. Thankfully though, in February, a team of PASSCAL staff was able to make it out to West Antarctica and pick up all the stations! From what I’ve heard from other field teams who were down on the ice this last season, ours was the only one able to complete all its objectives for the season. Hopefully next year things will be better for folks in Antarctica.

Back in the warmth… I shall endeavor to keep this page updated with videos, pics, and the latest things I’ve been working on. That means a whole bunch of augmented reality development for the Microsoft HoloLens that I’ve been doing for the Fossett Lab at WashU!

Ain’t nothing but a hole-digger…

So the majority of the time at camp was actually spent away from it, at seismic stations that needed to be dug out, boxed, brought back to camp, and readied to be shipped off the continent. Nine of the sites were near enough to reach by snow machine; the rest we had to fly to.

Our closest station to McMurdo was in sight of Erebus.

All the sites are pretty similar. There’s a solar panel for power (along with a GPS unit and a Xeos transmitter that allow us to monitor the station’s state of health throughout the year), a large orange box that contains the datalogger and batteries (this weighs 200lb and is buried about 4 feet down), and a sensor that is buried about 6 feet down. Needless to say, that’s a lot of digging, but after the first few stations everyone gets into a rhythm and usually everything’s finished in about 2 hours.


Photos C/O Glenn McClure

After all the equipment is extracted from the ground, we load up the sleds/plane and head off to the next site to start digging all over again! Usually a team of 4 or 5 can get two stations done in a day. Once we return to camp (and rest sore backs and wrists!), the data gets downloaded from the datalogger and backed up multiple times. The data is why we’re there, and all this would be for nothing if we lost it.

The aftermath!

I’m now back Stateside, so information on the current state of the deployment is limited; however, I’ll try to get an update on what has happened over the last week in the next post.


Life on the ice II (deep field camping)

tl;dr: it’s cold

MISSION UPDATE (still sticking with it for some reason!): The planned pull-out of the science team from camp was today (Monday 21st Nov)… sadly it was cancelled, so we’ll try again tomorrow. We also had no opportunity to go and get more stations this last weekend due to the weather setting in at McMurdo.

Snow/ice camping really shouldn’t be daunting to an experienced camper. So long as you have the right equipment (which we do down here), you are generally comfortable. You just have to accept that you won’t be warm unless you are wearing a lot of clothes!

“Tent City”: many, many mountain tents!

Often the first thing you do when you’re dropped off, literally in the middle of nowhere, is set up a shelter before the plane takes off and heads back to base. This usually takes the form of erecting a Scott tent (a century of technology really hasn’t improved on this design for a quick and sturdy shelter). In our case, our camp was coined “Yesterday Camp” due to its proximity to the International Date Line (78.9723˚S, 179.8868˚W)… on the other side of it from McMurdo Station (77.85˚S, 166.6667˚E).

Personal tents are usually mountain tents, which are much warmer and more comfortable than the Scott tent, which, thanks to its removable floor, usually gets repurposed as an outhouse! Then there are communal tents used for cooking, eating, communications, and science (in our case, storing computers for downloading data from the seismic station hard drives). This time we had heaters in a couple of tents, which made any downtime way more comfortable!

Morning meeting 7am sharp….ish! (photo C/O Glenn McClure)

The routine at camp generally starts with breakfast at about 6.45am, with a morning meeting discussing the day’s plans at about 7. If the weather’s good and there’s a plane at camp, then we can fly, and that takes priority. Ideally we want wheels off by 8.30am to make the most of the day and dig up as many stations as possible. The sites we can reach by snow machine are much less dependent on weather conditions, and many of them are less than 20km from camp (about an hour’s drive). Lunch is generally of the “packed” variety, and it’s obviously a good idea to bring plenty of water in a thermos. Even though it’s cold, it can get pretty hot and sweaty digging 6-foot holes all day, plus the air is so dry it’s really easy to get dehydrated. I’ll try to cover the actual station recovery in another post, otherwise this one will be way too long!


By the time we get back to camp there’s usually a bit of time to relax, grab a hot drink, and rest any aching muscles before dinner at 6pm. Dinner is usually pretty high in calories, with plenty of carbs and fat to help keep the body’s engine warm. Mostly it’s food that can be frozen or is canned; fresh fruit and veg are pretty much out, I’m afraid. We do have a plentiful supply of beer and chocolate though, so it’s not all bad! Granted, most of it is well past its sell-by date…

Ice frisbee

If the weather is nice and the winds are light, it really can be a pleasant experience to be out in the middle of nowhere, away from it all… oh, and the American election… ugh. We brought some entertainment with us: cross-country skis, board games, a frisbee, and even a croquet set! Sadly we didn’t get to play croquet while I was there… it’s not the flattest of ground, so it would have been interesting.