HiRISE and shine!

About a month ago I spotted on Twitter a link to a website, AreoBrowser. The site showed off the open digital terrain models (DTMs) generated by the Mars Reconnaissance Orbiter High Resolution Imaging Science Experiment (HiRISE) and available to download from the University of Arizona HiRISE website. AreoBrowser looked like a really useful tool to browse the HiRISE catalog – but wouldn’t it be great to see these models in the HoloLens? After a quick conversation with the creator of AreoBrowser, Matt Brearley, I came up with a plan: download all 557 models, process the meshes down to a usable number of triangles, and finally convert them to a format that can be served up wirelessly to a HoloLens application. That format should really be Unity AssetBundles, as these are compressed files that can be loaded quickly at runtime on the HoloLens.


The first step, downloading the data, should be straightforward enough. Only the mesh data needs to be downloaded; the texture JPEGs can stay online and be referenced from within the HoloLens app. A wget command should do the trick.
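The wget step is easy to batch up in a few lines of Python. A minimal sketch – the base URL and DTM IDs below are placeholders, not the real University of Arizona paths:

```python
# Sketch of the bulk-download step. BASE_URL and the IDs are hypothetical;
# the real file paths come from the University of Arizona HiRISE website.
BASE_URL = "https://example.org/hirise/dtm"

def wget_commands(dtm_ids):
    """Build one wget command per DTM .IMG mesh file."""
    return [f"wget {BASE_URL}/{dtm_id}.IMG" for dtm_id in dtm_ids]

commands = wget_commands(["dtm_001", "dtm_002"])
```

Loop that over the full catalog list and all 557 meshes come down unattended.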

Luckily the HiRISE team are up on visualizations and have developed a Blender add-on that allows for the import of the .IMG DTM files. Once imported into Blender it was clear that these meshes were going to be way too big for the HoloLens’s limited processing power. Not to worry: a quick decimation and a good-quality texture should make things look not too bad. After a brief foray into Blender Python coding, all the .IMG files were decimated to a similar level and converted to .blend files, which Unity can read. I probably should have used .OBJ … but here we are.
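The batch processing can be driven from outside Blender by invoking it headlessly once per file. A sketch of that driver – `decimate.py` is a hypothetical helper script that would use Blender’s Python API (bpy) plus the HiRISE add-on to import, decimate, and save each .blend:

```python
# Build one headless-Blender invocation per .IMG file. "decimate.py" is a
# hypothetical bpy script; "ratio" would be the Decimate modifier's target
# fraction of triangles to keep.
def blender_commands(img_paths, ratio=0.1):
    return [
        ["blender", "--background",        # run without the UI
         "--python", "decimate.py",        # hypothetical import/decimate script
         "--", str(path), str(ratio)]      # args passed through to the script
        for path in img_paths
    ]

cmds = blender_commands(["dtm_001.IMG", "dtm_002.IMG"])
```

Each command list could then be run with `subprocess.run`, leaving the whole catalog to churn through overnight.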

The .blend files are fine, but they’re uncompressed, they take a little while to load into Unity at runtime (after the app has already started), and loading them would require some third-party code. Preloading 500+ models is really not feasible (we’re still at GBs of data); the app would take forever to start. Bad idea. Unfortunately, bulk file conversion and compression isn’t really what Unity is made for, and I wasn’t ready to go through each model individually, give it a unique filename, and compress each file into a Unity AssetBundle. Thankfully, there are enough programmable steps that can be hacked together to import, save, and compress the models, which makes things more manageable.

Great, all the models are processed and can be uploaded to a public server (we’re upgrading to Microsoft Azure storage) ready to be loaded up on the HoloLens, and in particular, our public GeoXplorer application. Up until now I’ve really only had a few models to serve up, and they’ve been fairly well-spaced around the Earth. Now we’re dealing with a few hundred models from a whole other planet. It may be time to rethink the user interface.

UI Change

I had thought about doing something similar to AreoBrowser and having a search field, but typing on the HoloLens is nigh-on impossible and certainly not ideal for repeat interactions. Besides, for a spatial visualization it would be a lot nicer to have the icons placed at their spatial coordinates rather than in a list. It’s a bonus that the HiRISE team also provides the latitude and longitude of all the DTMs on their website. This made it straightforward enough to add small icons at their locations on the textured sphere that represents Mars in GeoXplorer. But it doesn’t make selecting an individual icon sphere very easy. What may work is selecting multiple icons within an area around the HoloLens cursor. This limits the selection to around half a dozen models, which can then be displayed as a list.
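The two pieces of this – dropping an icon on the sphere from its latitude/longitude, and picking up the icons within a patch around the cursor – boil down to a little spherical trigonometry. A sketch, assuming radians and a y-up axis convention (the app’s actual conventions may differ):

```python
import math

def latlon_to_xyz(lat_deg, lon_deg, radius=1.0):
    """Latitude/longitude in degrees -> Cartesian point on a sphere (y up)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.sin(lat),
            radius * math.cos(lat) * math.sin(lon))

def icons_near(cursor_latlon, icons, max_angle_deg=10.0):
    """Select icons within a small angular radius of the cursor hit point."""
    lat1, lon1 = map(math.radians, cursor_latlon)
    hits = []
    for name, (lat_deg, lon_deg) in icons:
        lat2, lon2 = math.radians(lat_deg), math.radians(lon_deg)
        # central angle via the spherical law of cosines
        c = math.acos(min(1.0, math.sin(lat1) * math.sin(lat2) +
                          math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)))
        if math.degrees(c) <= max_angle_deg:
            hits.append(name)
    return hits
```

With a ~10° patch, a cursor hit only ever pulls in the handful of DTMs nearby, which keeps the resulting list short.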

OK, there’s still a list, but we can maybe improve on this!

Each DTM has its own sphere icon placed at its location on Mars
The cursor gains a circle around it to select a patch of icons, which turn blue
On selection of a patch, a set of clickable text appears with the title of each DTM
Selecting a DTM, the app downloads a mesh and a satellite image texture to overlay. This can be moved, scaled and rotated using two-handed manipulation
Alternatively the app can overlay a colored altimetry map of the DTM area showing the vertical scale of the terrain
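The altimetry overlay in that last step is essentially a color ramp over the DTM’s elevation range. A minimal sketch – the blue-to-red ramp here is an assumption, not necessarily the app’s actual palette:

```python
# Map an elevation to an (r, g, b) color: blue at the DTM's minimum
# elevation, red at its maximum. The ramp choice is illustrative only.
def altimetry_color(elevation, min_elev, max_elev):
    t = (elevation - min_elev) / (max_elev - min_elev)  # normalize to 0..1
    t = max(0.0, min(1.0, t))                           # clamp outliers
    return (t, 0.0, 1.0 - t)
```

Run per-vertex (or baked into a texture), this gives an immediate read on the vertical scale of the terrain.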

A video outlining these interactions is embedded below:

There are a few issues, notably that the meshes are still a little large (triangle-count-wise) and could be made a little smoother as well. Also, when they’re imported into Unity, the origin point is not at the center of the model but at a corner. This is best for the texture but not for manipulation: it feels like the model is attached to something at the corner, and that’s not very intuitive.
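The origin problem has a simple fix in the processing pipeline: shift the vertices so the pivot sits at the center of the mesh’s bounding box. A sketch of that recentering step (plain Python on vertex tuples; in practice this would happen in the Blender or Unity import stage):

```python
# Translate vertices so the bounding-box center lands at the origin,
# making rotation and scaling feel anchored to the model itself.
def recenter(vertices):
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    center = [(lo + hi) / 2 for lo, hi in zip(mins, maxs)]
    return [tuple(c - o for c, o in zip(v, center)) for v in vertices]
```

The texture UVs are untouched, so the overlay still lines up – only the pivot moves.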

GeoXplorer will be updated in the coming weeks.

Gypsum crystals the size of the room!

One of the functions of the Fossett Lab is to provide support and development for the Earth Science labs at WashU. This has been somewhat on the back-burner until we knew the capabilities of the devices we had, and had our own infrastructure in place to cater adequately to researchers. I think I’ve mentioned before that the space we inhabit (it’s a windowless room that was painted black… so welcoming) used to house a CAVE. The CAVE was 3 walls and a floor, each about 3 m square, fitted with 3D projectors; it let a few people stand around and look at 3D data, and it would often break down. Granted, our HoloLenses are pretty buggy and by no means plug-and-play, but at least we don’t have to spend a vast amount on service fees! The CAVE was there to help (primarily planetary) scientists look collaboratively at DEMs (Digital Elevation Models) and other GIS data. So over the last few months I’ve been developing a flexible HoloLens application that will display not only DEMs in a shared environment, but any 3D data a scientist would want to look at.

The first version was pretty simple: basically a clone of the SharingSpawnText example included in Microsoft’s HoloToolkit release. We can load a few models (I think keeping it down to 4 makes it a little more manageable), and the positions and rotations are shared between devices so the model looks the same for everyone. It almost does what the CAVE used to do for DEMs, except now we have way more freedom to explore. The video above gives an example where 2 professors from WashU and I are discussing the Martian DEM in front of us.

So that’s all very well. But it would be so much nicer to have any model, not just the 3 or 4 that are preloaded with the application. This brings in something similar to my previous post ‘Sharing is Caring’, where I load data from a remote server and share it among HoloLens users. Except now I’m dealing with complex mesh surfaces, materials and textures. Thankfully Unity makes things a little easier for me with AssetBundles: compressed files that package all three of those components into a single file that can be served up from a web server. OK, sounds straightforward…

I’m not going to go into the nitty-gritty with this, but suffice it to say that it’s been a bit of a struggle. There are still a few bugs to smush, but I think we’re pretty close to being able to serve up a model that the application can read on the fly, download, share, and manipulate. The Fossett Lab has had an undergraduate summer intern working for the last few weeks documenting some of the more precious mineral samples in our collection, even stuff that we don’t have on display. She’s done this using photogrammetry (pretty much like my post ‘Behind the Scenes: HoloLens Photogrammetry’), which has added a few complications with some of the shiny/translucent samples, but we now have about 30 good-looking models. As there are so many, we really can’t preload them into the app, and remembering 30 distinct voice commands seems a little unreasonable, so I coded up a menu of buttons that automatically gets filled at startup. Here’s me and Anna testing things out:
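One way the auto-filled menu could work is for the app to fetch a small JSON manifest from the server at startup and build one button per entry. A sketch of the idea – the manifest schema and sample entries here are made up for illustration (though the desert rose gypsum is real!):

```python
import json

# Hypothetical manifest served alongside the AssetBundles; the schema is
# an assumption, not the app's actual format.
manifest_json = '''
[{"name": "Desert rose gypsum", "bundle": "gypsum_rose.unity3d"},
 {"name": "Pyrite",             "bundle": "pyrite.unity3d"}]
'''

def menu_entries(raw):
    """Parse the manifest into (button label, bundle file) pairs."""
    return [(m["name"], m["bundle"]) for m in json.loads(raw)]

entries = menu_entries(manifest_json)
```

Adding a new sample then means uploading a bundle and one manifest line – no app rebuild needed.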

One of the initial two samples (the pink one) is gypsum in its ‘desert rose’ morphology which, as you can imagine, is pretty fragile and you don’t really want to be handling it too much. But now in AR we can blow up crystals to the size of the room, or even throw them across the room if we want!

Now I just need to make the interface a little more usable, and then allow access to all those outcrop models I’ve made, oh and all the DEMs, and atomic structures, and protein molecules… and there’s probably so many other things as well!


Sharing is caring

This has been a busy couple of months. A lot of time in February was spent on a plane, and as such it felt like I was getting behind on a bunch of things. It really took a couple of weeks to get back into the swing of things and start pushing forward again.

The Fossett Lab at WashU is making bigger strides now. I’m into my 2nd year there and it feels like we can start reliably running classes for small groups. We’ve had a couple of mineralogy labs in, and the undergrads that went to the Azores did some testing of how well our outcrop models may help (or not!). We have a CS undergrad, kindly working for us off of a fellowship, who is making great strides at getting our web platform up and running and integrated with our HoloLens apps. Microsoft have also been involved, helpfully sending 2 techs to see us and discuss what they can do for us (probably quite a lot given their resources). It has also been a time to reflect on the work I’ve done in building the applications. They were really a first attempt, layered over time with gradually better-written code and graphics. Best to break things down and rebuild.

Learning mineral structures from a different point of view
Prepping for a trip to the Azores

Well, mostly – there are some good things to keep. But there is one glaring omission: sharing holographic experiences. It’s a part of the AR experience that is key for any instructor wanting to guide students through a set of examples or tasks. So far we had been able to make sure that the holograms appeared in the same place for everyone with a HoloLens on, but there was no communication between the devices. If 1 person clicked on a button, the rest would have to do the same. Wouldn’t it be better if the instructor could click a button and it automatically clicked for everyone? Or even rotating and moving holograms? This has been elusive.

Up till now….

The way we set up our applications, a lot of the three-dimensional models that appear as holograms are stored not on each HoloLens, but on a remote server that the HoloLens can reach with a simple URL (just like typing an address into a web browser). My thinking had always been that I could download the model onto 1 HoloLens, and then broadcast the model to the rest of the connected headsets. This is a really inefficient way of doing what I wanted, and besides, without disabling firewalls and other security protections it would be impossible. Materials and textures would have to be copied across and synced up, along with the positions of the many parts that make up fairly complex holograms.

Things were compounded by documentation and help for sharing holograms being pretty thin on the ground. There are some examples on Microsoft’s web pages, but they only cover a pretty straightforward case, and it isn’t clear how to adapt the code for my own use. As such I’d put off, and put off… and put off really tackling this problem.

I finally figured that instead of doing what I had planned, I could just send a signal (a simple ‘yes, go download this’ or ‘go download that’) to all the connected HoloLenses to download the model separately, and put that model within another object whose position and (if needed) rotation I could share. When I say ‘within another object’, that object can be completely transparent – blank, nothing, nada but a singular point that has a position, rotation and size in three-dimensional space. I feel I may have lost some people! But suffice it to say that I think this should solve all my problems. I’ve worked out how to share simple booleans (either true or false), which is great for signalling whether or not a button has been pressed, and from that I can signal that a model should be downloaded. This may seem like baby steps to coding experts, but it seems like I’ve learned a lot in the last couple of months! Maybe I’ll write some better documentation…
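The signalling idea can be sketched conceptually (in plain Python here, rather than the Unity C# the app actually uses – all the names are stand-ins):

```python
# Conceptual stand-in for the shared-signal pattern: broadcast only a
# small flag to every connected device; each device then downloads the
# model itself from the remote server.
class SharedState:
    def __init__(self):
        self.devices = []

    def connect(self, device):
        self.devices.append(device)

    def broadcast(self, model_name):
        # The signal is tiny; no mesh or texture data crosses the network here.
        for device in self.devices:
            device.on_signal(model_name)

class Device:
    def __init__(self):
        self.downloaded = []

    def on_signal(self, model_name):
        # In the real app this would fetch the model from the server URL.
        self.downloaded.append(model_name)
```

The key design point is that only the boolean-sized signal is synced; the heavy download happens independently on each headset, sidestepping the firewall and bandwidth problems of broadcasting the model itself.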

Bringing that into the applications we’re furthest along with: our mineral structure app is now mostly rebuilt, and a convenient ‘Sync’ button has been added to make it easy for instructors (see the above video). All the models are now on the remote server (which means a lot less lagginess at startup), and on that front, uploading new mineral structures is a doddle and requires no extra faffing around in Unity on my part. The outcrop application will likely be a little harder to apply what I’ve learned to. However, it is in the process of being stripped down and built back up again, better and more flexible than ever! I’ve also incorporated a couple of online repositories of imagery (NASA’s Global Imagery Browse Services) and data (USGS earthquake locations), so this should provide a much broader range of geologic and planetary information. By the time I’m done with them, they’ll feel much different and much more inclusive.

I think I’m probably going to have just gotten good at all this and then they’ll change everything…

Getting adventurous…

Up until now I’ve been trying to just get content into the HoloLens environment: producing 3D models in various guises and importing them into Unity or throwing them up onto a server for the HoloLens to fetch.

The next stage is telling the world (think big?!) what we’re doing, and that means trying to display what is essentially a 1-person projector to a whole room of people. Conveniently, the HoloLens allows you to live-stream what someone is seeing through the device over a WiFi connection. A tiny webcam-like camera sits on the HoloLens and combines its image with the holograms on display. It does a nice enough job for a quick demo, but it’s pretty low-resolution and grainy, so we should think about making something a little more polished.

About a year or so ago, a group of HoloLens developers came up with a way of solving this issue by upgrading the live-stream optics: jury-rigging a HoloLens on top of a DSLR camera. I had a look through the tutorials back then and chalked it up as something fun to do, but well above my competence level at the time. However, these last few months I’ve been chipping away at getting things set up, and today… success!

Firstly I took stock of what we had and what the major expenditures were going to be. Thankfully the lab had bought a nice new DSLR (Nikon 5300) that we use for field work, so no spending needed there. The lab desktop was lacking an HDMI-in, so we had to splash out on a capture card; we used the one listed on the GitHub site, which had somewhat mixed reviews, but in hindsight I can say I’ve had no issues getting it up and running. The software SDKs needed were free downloads (woo!), so the final conundrum was how to attach the HoloLens to the DSLR. The tutorial mentions an elaborate setup involving both 3D-printed and machined parts, which I’m sure makes for a very secure housing for the expensive HoloLens. Seemed a little over-elaborate for me… to the interwebs! I stumbled across this 3D-printable model, which I ended up sending to the library at Columbia:

Turns out most universities have a 3D printer lying around and they’ll print things like this for free!

This did the job nicely.

So, all the hardware is sorted, and the capture card works well, live-streaming through the HDMI-out from the camera to the desktop. Now we need to calibrate things, as the software needs to composite images from 2 cameras (the DSLR and the HoloLens) that are in different locations. Calibration time!

After taking about 50 photos like this, you’d look bored too…

This process spits out a text file that provides the transform information for the two images to be combined and overlain so the holograms appear in the right positions.

The final step is incorporating this within an application. I managed to work out how to share holograms a while back – a requirement for this approach – although there are now multiple ways to share holographic environments between HoloLenses. I’m going to use my trusty Virtual Earth application, which has a hi-res image of the Earth overlain on a sphere and can be shared among a group for class use. Miraculously, on firing everything up for the first time (well, not quite the first time, as I didn’t read the instructions properly and booted things up in the wrong order…!) it worked! So many more pixels on my holographic image captures:

Before: just using the HoloLens’s ‘webcam’
After: using the DSLR composite from exactly the same position. The hologram is way more solid-looking and much higher resolution – plus the background is much clearer.
The outcrops look pretty good too: here’s one created from drone imagery

And it even works for movies too:

1080p’s of relief. It only took a few months of getting all these things put together and having the confidence to try it. The next job will be getting holograms to sync up, so that the holograms will be much more dynamic in these videos!


1 year in…

The last couple of months have been a little busy. From getting back to some research on the Whillans Ice Stream in Antarctica, to continuing the development of the Fossett Lab’s applications, it’s been a bit hectic. Throw in a SeismoDome planetarium show (which was a hit, and we may have some shows outside of NYC) and there hasn’t been all that much time to write any blogposts!

SeismoDome at the Hayden Planetarium

The annual AGU meeting in New Orleans is upon us and preparation is well underway for presenting a demonstration of the Fossett Lab’s work to the scientific community. Given the expected unreliability of the WiFi at AGU, I’ve been putting together a standalone demo app that should show off a variety of the topics we’ve been working on: outcrops, atomic structures and earthquake catalogs. We’re hoping that the community will be generous enough to help us grow the content, as the HoloLens-side application is pretty much ready for it.

Also in prep for AGU, my GitHub site has gotten some much-needed attention and I’m starting to upload a few of the applications. It’s still pretty rough but the essentials will be up there. I’ve also bought business cards for the first time ever…

This last year has been a lot of learning for me. Designing user experiences (UX) was not really part of my Ph.D., but it has been a lot of fun. It’s also been mostly solo work and a lot of self-learning, so it’s a relief to introduce a couple of undergraduates into the lab in the new year: one working on importing the Rudolph rock collection into a holographic environment, and one working more closely with me on the design of Earth Science applications (and hopefully fixing a lot of my poorly-written code!). Bring on 2018!

Sound and structure

Earthquakes make a noise… they’re just at such low frequencies that we can’t hear them. Our ears are excellent at picking out strange things or patterns within what would look like noise if you displayed the waveforms visually. So can we use sound to uncover more information in our data than current analysis methods do? This is how the Seismic Sound Lab started.
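The usual trick for making earthquakes audible is time compression: play the recorded ground motion back thousands of times faster than real time, which shifts the sub-audible frequencies up into hearing range. A sketch of the arithmetic (the rates and speed-up factor below are illustrative values, not the Sound Lab’s actual settings):

```python
# Time compression for sonification: speeding playback up by a factor N
# multiplies every frequency in the signal by N.
def audible_rate(recording_rate_hz, speedup):
    """Playback sample rate after time compression."""
    return recording_rate_hz * speedup

def shifted_frequency(signal_hz, speedup):
    """Where a given seismic frequency lands after the speed-up."""
    return signal_hz * speedup
```

So a 0.1 Hz seismic wave sped up 2000× lands at 200 Hz – comfortably within human hearing – and 17 years of catalog collapses into a few minutes of sound.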

Certainly sound adds a more immersive feel to any data visualization. This last month, in the run-up to Lamont’s Open House, I’ve been working on developing new visuals for the Sound Lab using Unity 3D. The game engine allows for much more dynamic visuals, including the ability to fly inside a subduction zone. It’s been a bit of a challenge to get the visuals right (they’re still not as good as I want them to be!) but we’ve put together a demo of 17 years of Chilean seismicity that begins to show the capabilities of using gaming software for data viz.

Catalog movie for 17 years of Chilean earthquakes. The surface of the Earth is see-through, the circle at the center is the Earth’s outer core.

The 3-dimensional aspect allows us to show off structures illuminated by earthquake locations. We’re also developing visuals for a number of other significant earthquakes: Parkfield, Sumatra, Haiti, and Tohoku. For the latter we’re also developing 3D ground-motion visualizations and sounds utilizing the Transportable Array. The sounds are still in the works, but here’s a preview showing where the TA stations are in the Midwest of the USA:

Obviously all the visuals will be developed for the planetarium show! The flythroughs of the subduction zones look really good in the dome – I unfortunately can’t really show you the full effect here on a computer screen, but here’s the general idea:

Lots to do still!


Protein (holo) synthesis

3D holographic models are all well and good, but where the technology may provide an edge is with models that are impossible to physically build (3D printing has come a long way, but it has yet to print out objects that can hover!). This became more apparent to me when I was asked whether we could visualize a protein molecule using the HoloLens.

Now, being a geophysicist by trade, I had no idea where to begin, or what to load up that would be useful! I’m still a little unsure about what will be most beneficial to researchers and educators, but I think the implementation of holographic models in this instance could really help.

There is software out there on the market that can represent proteins and other biological structures in 3D, and I was pointed towards PyMol as a good place to start. OK, so I could get a molecule visualized in 3D – but could it be rendered on the HoloLens, or would it be too much information?

First up, I needed to convert the file format to something Unity could understand. This wasn’t trivial: although PyMol can export the COLLADA .dae format, the colors (which can be important) are not retained when these are read into Unity. Shifting to Plan B, I loaded a .wrl exported from PyMol into Blender. This seemed to work – all the colors were retained, and when I opened the .blend file in Unity the colors were all present and correct. I loaded up an application on the HoloLens and took a look-see:

Nice, DNA strands and all! That was fine, but some things needed tweaking. Firstly, the spheres are probably not the best way to visualize the organic chains, and some of the elements are overlain by others, so I’ll change them to PyMol’s ‘stick’ representation. Secondly, and slightly more importantly, the model was incredibly juddery; the loss of a high frame rate is a symptom of trying to render too much information at once, and it leads to a lousy experience. Best to try to decimate the mesh a little. Lastly, it’s a static hologram – we can do better than that!

Back in Blender, I can check whether there are duplicate vertices that can be deleted. This is often the case when models are generated by scientific software that isn’t really optimized for this sort of thing. Yes, there were a whole bunch of duplicates, so removing them helps a lot with reducing the file size and the rendering load. While I was in Blender I also sorted out the shader so that the surfaces appear smooth, as opposed to faceted as in the video above.
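The duplicate-vertex cleanup is what Blender’s ‘remove doubles’ does; conceptually it just merges vertices closer together than a small tolerance. A plain-Python sketch of the idea (quadratic and slow, unlike Blender’s spatially indexed version, but the logic is the same):

```python
# Keep one representative per cluster of near-identical vertices.
# Blender's own implementation is far faster; this is the concept only.
def remove_doubles(vertices, tol=1e-6):
    kept = []
    for v in vertices:
        if not any(all(abs(a - b) <= tol for a, b in zip(v, k)) for k in kept):
            kept.append(v)
    return kept
```

Exporters often emit one copy of a vertex per face that touches it, so on meshes like these the reduction can be dramatic.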

Back in Unity I loaded up the improved model (much faster!) and designed a new manipulation toolbox to rotate, move, and resize the hologram so I can place it on my desk. Here’s the final version:

It’s no longer jerky, the smooth shaders are much nicer to look at, and the ‘sticks’ look good too! The next step will hopefully be adding a whole load more molecules, perhaps visualizing relative sizes and how they interact. As it turns out, when I export this prefab as an AssetBundle the file size is less than 1 MB – that should make for a fast load time from an online server!