One of the functions of the Fossett Lab is to provide support and development for the Earth Science labs at WashU. This has been somewhat on the back-burner until we knew the capabilities of the devices we had and had our own infrastructure in place to cater adequately to researchers. I think I’ve mentioned before that the space we inhabit (a windowless room that was painted black… so welcoming) used to house a CAVE. The CAVE consisted of three walls and a floor, each about 3 m square, fitted with 3D projectors; it let a few people stand around and look at 3D data together, and it would often break down. Granted, our HoloLenses are pretty buggy and by no means plug-and-play, but at least we don’t have to spend a vast amount on service fees! The CAVE was there to help (primarily planetary) scientists look collaboratively at DEMs (Digital Elevation Models) and other GIS data. So over the last few months I’ve been developing a flexible HoloLens application that will display not only DEMs in a shared environment, but any 3D data a scientist would want to look at.
The first version was pretty simple: basically a clone of the SharingSpawnText example included in Microsoft’s HoloToolkit release. We can load a few models (keeping it down to four makes things a little more manageable), and the positions and rotations are shared between devices so the model looks the same for everyone. It’s almost what the CAVE used to do for DEMs, except now we have way more freedom to explore. The video above gives an example where two professors from WashU and I are discussing the Martian DEM in front of us.
So that’s all very well, but it would be so much nicer to be able to load any model, not just the three or four that are preloaded with the application. This brings in something similar to my previous post ‘Sharing is Caring’, where I loaded data from a remote server and shared it among HoloLens users. Except now I’m dealing with complex mesh surfaces, materials and textures. Thankfully Unity makes things a little easier for me with AssetBundles: compressed packages that wrap all three of those components into a single file that can be served up from a web server. OK, sounds straightforward…
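To give a flavour of what this looks like, here’s a minimal Unity C# sketch of downloading an AssetBundle at runtime and dropping the model into the scene. The URL, asset name, and script wiring are placeholders (not the app’s actual code), and it assumes a Unity version with the `UnityWebRequestAssetBundle` API:

```csharp
// Sketch only: fetch a compressed AssetBundle (mesh + materials + textures
// in one file) from a web server and instantiate the model it contains.
// The bundle URL and asset name below are hypothetical placeholders.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class RemoteModelLoader : MonoBehaviour
{
    public string bundleUrl = "http://example.com/bundles/desert_rose";

    IEnumerator Start()
    {
        // Download the bundle asynchronously so the app stays responsive.
        UnityWebRequest request = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl);
        yield return request.SendWebRequest();

        if (request.result != UnityWebRequest.Result.Success)
        {
            Debug.LogError("Bundle download failed: " + request.error);
            yield break;
        }

        AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);

        // Pull the prefab out of the bundle and place it in front of the user.
        GameObject model = bundle.LoadAsset<GameObject>("desert_rose");
        Instantiate(model, Vector3.zero, Quaternion.identity);

        // Free the bundle's loader memory but keep the instantiated assets.
        bundle.Unload(false);
    }
}
```

In a shared session you’d then broadcast which bundle was loaded (plus its transform) so every headset instantiates the same model, much as the preloaded version already does.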
I’m not going to go into the nitty-gritty here, but suffice it to say that it’s been a bit of a struggle. There are still a few bugs to smush, but I think we’re pretty close to being able to serve up a model that the application can download, share, and manipulate on the fly. The Fossett Lab has had an undergraduate summer intern working for the last few weeks documenting some of the more precious mineral samples in our collection, including pieces we don’t have on display. She’s done this using photogrammetry (much like my post ‘Behind the Scenes: HoloLens Photogrammetry’), which added a few complications with some of the shiny or translucent samples, but we now have about 30 good-looking models. With that many, we can’t preload them into the app, and remembering 30 distinct voice commands seems a little unreasonable, so I coded up a menu of buttons that automatically gets filled at startup. Here’s me and Anna testing things out:
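The auto-filled menu boils down to fetching a list of model names from the server and spawning one button per entry. Here’s a hedged sketch of that startup step; the manifest URL, its one-name-per-line format, and the commented-out tap wiring are all assumptions for illustration:

```csharp
// Sketch only: populate a menu of buttons at startup from a server-side
// manifest of available models. URL and manifest format are hypothetical.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ModelMenu : MonoBehaviour
{
    public string manifestUrl = "http://example.com/bundles/manifest.txt";
    public GameObject buttonPrefab;   // a simple tappable button prefab
    public Transform menuRoot;        // layout parent that arranges the buttons

    IEnumerator Start()
    {
        // Assume the manifest is plain text, one model name per line.
        UnityWebRequest request = UnityWebRequest.Get(manifestUrl);
        yield return request.SendWebRequest();
        if (request.result != UnityWebRequest.Result.Success)
            yield break;

        foreach (string line in request.downloadHandler.text.Split('\n'))
        {
            string modelName = line.Trim();
            if (modelName.Length == 0) continue;

            // One button per model; tapping it would kick off the bundle
            // download (hypothetical wiring, depends on the toolkit's
            // button/interaction component):
            GameObject button = Instantiate(buttonPrefab, menuRoot);
            button.name = modelName;
            // button.GetComponent<Interactable>().OnClick.AddListener(
            //     () => StartCoroutine(LoadModel(modelName)));
        }
    }
}
```

The nice part of this design is that adding a new sample is purely a server-side job: upload the bundle, append a line to the manifest, and every headset picks it up on the next launch.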
One of the initial two samples (the pink one) is gypsum in its ‘desert rose’ morphology which, as you can imagine, is pretty fragile; you don’t really want to be handling it too much. But now in AR we can blow crystals up to the size of the room, or even throw them across it if we want!
Now I just need to make the interface a little more usable, and then allow access to all those outcrop models I’ve made, oh and all the DEMs, and atomic structures, and protein molecules… and there’s probably so many other things as well!