Gypsum crystals the size of the room!

One of the functions of the Fossett Lab is to provide support and development for the Earth Science labs at WashU. This has been somewhat on the back-burner until we knew the capabilities of the devices we had and had our own infrastructure in place to cater adequately to researchers. I think I’ve mentioned before that the space we inhabit (a windowless room that was painted black… so welcoming) used to house a CAVE. The CAVE was three walls and a floor, each about 3 m square, fitted with 3D projectors; it let a few people stand around and look at 3D data, and it would often break down. Granted, our HoloLenses are pretty buggy and by no means plug-and-play, but at least we don’t have to spend a vast amount on service fees! The CAVE was there to help (primarily planetary) scientists look collaboratively at DEMs (Digital Elevation Models) and other GIS data. So over the last few months I’ve been developing a flexible HoloLens application that will display not only DEMs in a shared environment, but any 3D data a scientist would want to look at.

The first version was pretty simple: it was basically a clone of the SharingSpawnText example included in Microsoft’s HoloToolkit release. We can load a few models (keeping it down to four makes things a little more manageable), and the positions and rotations are shared between devices so the model looks the same for everyone. It’s almost what the CAVE used to do for DEMs, except now we have way more freedom to explore. The video above gives an example where two WashU professors and I are discussing the Martian DEM in front of us.

So that’s all very well, but it would be so much nicer to have any model available, not just the three or four that are preloaded with the application. This brings in something similar to my previous post ‘Sharing is Caring’, where I loaded data from a remote server and shared it among HoloLens users. Except now I’m dealing with complex mesh surfaces, materials and textures. Thankfully Unity makes things a little easier for me with AssetBundles: compressed packages that wrap all three of those components into a single file that can be served up from a web server. OK, sounds straightforward…
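In code, fetching one of those bundles boils down to a single web request. Here’s a minimal sketch (the URL, class and asset names are placeholders of mine, and it assumes a Unity version that has UnityWebRequestAssetBundle):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of a remote-model loader. The URL and asset name are placeholders.
public class RemoteModelLoader : MonoBehaviour
{
    public string bundleUrl = "http://example.com/bundles/desert_rose";
    public string assetName = "DesertRose";

    public IEnumerator LoadModel()
    {
        // One request fetches the whole bundle: mesh, materials and textures.
        using (UnityWebRequest req = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError)
            {
                Debug.LogError("Bundle download failed: " + req.error);
                yield break;
            }

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);

            // Pull the prefab out of the bundle and place it under this object.
            GameObject prefab = bundle.LoadAsset<GameObject>(assetName);
            Instantiate(prefab, transform);
            bundle.Unload(false); // keep the instantiated copy, drop the compressed data
        }
    }
}
```

The nice part is that the HoloLens never needs the source mesh, material or texture files; it just unpacks whatever the server hands it.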

I’m not going to go into the nitty-gritty, but suffice it to say that it’s been a bit of a struggle. There are still a few bugs to squash, but I think we’re pretty close to being able to serve up a model that the application can download, share, and manipulate on the fly. The Fossett Lab has had an undergraduate summer intern working for the last few weeks documenting some of the more precious mineral samples in our collection, even ones we don’t have on display. She’s done this using photogrammetry (much like my post ‘Behind the Scenes: HoloLens Photogrammetry’), which added a few complications with some of the shiny/translucent samples, but we now have about 30 good-looking models. With that many we can’t preload them into the app, and remembering 30 distinct voice commands seems a little unreasonable, so I coded up a menu of buttons that gets filled automatically at startup (sketched below). Here’s me and Anna testing things out:
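As for that auto-filled menu, here’s roughly what happens at startup: ask the server what’s available, then make one button per model. Everything in this sketch (the manifest URL, prefab, and class names) is made up for illustration, and the actual button wiring depends on which HoloToolkit button is being used:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Builds one button per model listed on the server. All names here are illustrative.
public class SampleMenu : MonoBehaviour
{
    public string manifestUrl = "http://example.com/bundles/manifest.txt"; // one bundle name per line
    public GameObject buttonPrefab;   // a simple pressable-button prefab
    public RemoteModelLoader loader;  // the loader sketched above

    IEnumerator Start()
    {
        using (UnityWebRequest req = UnityWebRequest.Get(manifestUrl))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError) yield break;

            foreach (string line in req.downloadHandler.text.Split('\n'))
            {
                string modelName = line.Trim();
                if (modelName.Length == 0) continue;

                // No voice commands to memorize: each model just gets a labelled button.
                GameObject button = Instantiate(buttonPrefab, transform);
                button.name = modelName;
                // Hooking up the press event depends on the HoloToolkit button in use;
                // when pressed, it would set loader.bundleUrl and start loader.LoadModel().
            }
        }
    }
}
```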

One of the initial two samples (the pink one) is gypsum in its ‘desert rose’ morphology, which, as you can imagine, is pretty fragile, so you don’t really want to be handling it too much. But now in AR we can blow crystals up to the size of the room, or even throw them across the room if we want!

Now I just need to make the interface a little more usable, and then allow access to all those outcrop models I’ve made, oh, and all the DEMs, and atomic structures, and protein molecules… and there are probably plenty of other things as well!

 


Sharing is caring

This has been a busy couple of months. A lot of February was spent on a plane, and as a result it felt like I was falling behind on a bunch of things. It really took a couple of weeks to get back into the swing of things and start pushing forward again.

The Fossett Lab at WashU is making bigger strides now. I’m into my second year there and it feels like we can start reliably running classes for small groups. We’ve had a couple of mineralogy labs in, and the undergrads that went to the Azores tested how well our outcrop models might help (or not!). We have a CS undergrad, kindly working for us off of a fellowship, who is making great strides at getting our web platform up and running and integrated with our HoloLens apps. Microsoft has also been involved, helpfully sending two techs to see us and discuss what they can do for us (probably quite a lot, given their resources). It has also been a time to reflect on the work I’ve done in building the applications. They were really a first attempt, layered over with gradually better-written code and graphics. Best to break things down and rebuild.

Learning mineral structures from a different point of view
Prepping for a trip to the Azores

Well, mostly; there are some good things to keep. But there is one glaring omission: sharing holographic experiences. It’s a part of the AR experience that is key for any instructor wanting to guide students through a set of examples or tasks. So far we had been able to make sure that the holograms appeared in the same place for everyone with a HoloLens on, but there was no communication between the devices. If one person clicked on a button, the rest would have to do the same. Wouldn’t it be better if the instructor could click a button and it automatically clicked for everyone? Or if rotating and moving holograms could be shared too? This has been elusive.

Up till now…

The way we have to set up our applications is that a lot of the three-dimensional models that will appear as holograms are stored not on each HoloLens, but on a remote server that the HoloLens can reach with a simple URL (just like typing an address into a web browser). My thinking had always been that I could download the model onto one HoloLens and then broadcast it to the rest of the connected headsets. That is a really inefficient way of doing what I wanted and, without disabling firewalls and other security protections, probably impossible anyway. Materials and textures would have to be copied across and synced up, along with the positions of the many parts of what can be fairly complex holograms.

Things were compounded by documentation or help for sharing holograms being pretty thin on the ground. There are some examples on Microsoft’s web pages, but they only cover a pretty straightforward case, and it isn’t clear how to adapt the code for my own use. As such I’d put off, and put off… and put off really tackling this problem.

I finally figured that instead of doing what I had planned, I could just send a signal (a simple ‘yes, go download this’ or ‘go download that’) to all the connected HoloLenses, telling each to download the model separately and put it inside another object whose position and (if needed) rotation I could share. When I say ‘inside another object’, that object can be completely transparent, blank, nothing, nada but a singular point that has a position, rotation and size in three-dimensional space. I feel I may have lost some people! But suffice it to say that I think this should solve all my problems. I’ve worked out how to share simple booleans (either true or false), which is great for signalling whether or not a button has been pressed, and from that I can signal that a model should be downloaded. This may seem like baby steps to coding experts, but it feels like I’ve learned a lot in the last couple of months! Maybe I’ll write some better documentation…
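To give a flavour of the idea, here’s a rough sketch. The broadcasting of the boolean/button press itself is handled by HoloToolkit’s sharing service and isn’t shown; all names below are hypothetical:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of the "empty object" idea. The sharing layer broadcasts the signal;
// this component just reacts to it locally on each device.
public class SharedModelSlot : MonoBehaviour
{
    // Invisible placeholder: its position/rotation are the only things that ever
    // need to be kept in sync between headsets.
    public Transform sharedAnchor;

    bool modelRequested;   // the shared boolean, flipped when the instructor presses a button

    // Called locally on every device once the sharing layer reports the flag was set.
    public void OnDownloadSignal(string bundleUrl)
    {
        if (modelRequested) return;
        modelRequested = true;
        StartCoroutine(DownloadAndAttach(bundleUrl));
    }

    IEnumerator DownloadAndAttach(string bundleUrl)
    {
        // Every HoloLens downloads its own copy of the model from the server...
        using (UnityWebRequest req = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError) yield break;

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);
            GameObject model = Instantiate(bundle.LoadAsset<GameObject>(bundle.GetAllAssetNames()[0]));

            // ...and parents it under the shared slot, so only the slot's transform
            // travels over the network, never the mesh, materials or textures.
            model.transform.SetParent(sharedAnchor, false);
            bundle.Unload(false);
        }
    }
}
```

The key point is that the heavyweight data travels from server to headset independently; the network between headsets only ever carries a flag and a single transform.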

Now to bring that into the applications we’re furthest along with. Our mineral structure app is now mostly rebuilt and a convenient ‘Sync’ button has been added to make things easy for instructors (see the above video). All the models are now on the remote server (which means a lot less lag at startup), and on that front, uploading new mineral structures is a doddle and requires no extra faffing around in Unity on my part. The outcrop application will likely be a little harder to bring what I’ve learned into. However, it is in the process of being stripped down and built back up again, better and more flexible than ever! I’ve also incorporated a couple of online repositories of imagery (NASA’s Global Imagery Browse Services) and data (USGS earthquake locations), so this should provide a much broader range of geologic and planetary information. By the time I’m done with them, they’ll feel much different and much more inclusive.
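The earthquake side of that is refreshingly simple, since the USGS publishes its catalog as GeoJSON summary feeds. Something along these lines is all that’s needed to pull a week of events (the feed URL is real; the class name is mine and the parsing is only hinted at):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Fetches a week of earthquakes from the USGS GeoJSON summary feed.
public class QuakeFeed : MonoBehaviour
{
    const string FeedUrl =
        "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson";

    public IEnumerator Fetch()
    {
        using (UnityWebRequest req = UnityWebRequest.Get(FeedUrl))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError) yield break;

            string geojson = req.downloadHandler.text;
            // Each feature's geometry.coordinates is [longitude, latitude, depth_km];
            // a JSON parser would pull those out here and spawn a marker per event.
            Debug.Log("Downloaded " + geojson.Length + " characters of earthquake data");
        }
    }
}
```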

I think I’m probably going to have just gotten good at all this and then they’ll change everything…

Getting adventurous…

Up until now I’ve been trying to just get content into the HoloLens environment: producing 3D models in various guises and importing them into Unity, or throwing them up onto a server for the HoloLens to fetch.

The next stage is telling the world (think big?!) what we’re doing, and that means trying to display what is essentially a one-person display to a whole room of people. Conveniently, the HoloLens allows you to live-stream what someone is seeing through the device over a WiFi connection. The tiny webcam-like camera that sits on the HoloLens combines its image with the holograms on display. It does a nice enough job for a quick demo, but it’s pretty low-resolution and grainy, so we should think about making something a little more polished.

About a year or so ago, a group of HoloLens developers came up with a way of solving this issue, upgrading the live-stream optics by jury-rigging a HoloLens on top of a DSLR camera. I had a look through the tutorials back then and chalked it up as something fun to do, but well above my competence level at the time. However, these last few months I’ve been chipping away at getting things set up and today… success!

Firstly I took stock of what we had and what the major expenditures were going to be. Thankfully the lab had bought a nice new DSLR (a Nikon D5300) that we use for field work, so no spending needed there. The lab desktop was lacking an HDMI-in, so we had to splash out on a capture card. We used the one listed on the GitHub site, which had somewhat mixed reviews, but in hindsight I can say I’ve had no issues getting it up and running. The software SDKs needed were free downloads (woo!), so the final conundrum was how to attach the HoloLens to the DSLR. The tutorial mentions an elaborate setup involving both 3D-printed and machined parts, which I’m sure makes for a very secure housing for the expensive HoloLens. It seemed a little over-elaborate for me… to the interwebs! I stumbled across this 3D-printable model that I ended up sending to the library at Columbia:

Turns out most universities have a 3D printer lying around and they’ll print things like this for free!

This did the job nicely.

So, all the hardware is sorted: the capture card works well and live-streams through the HDMI-out from the camera to the desktop. Now we need to calibrate things, as the software needs to composite images from two cameras (the DSLR and the HoloLens) that are in different locations. Calibration time!

After taking about 50 photos like this, you’d look bored too…

This process spits out a text file that provides the transform information needed to combine and overlay the images so the holograms appear in the right positions.

The final step is incorporating this within an application. I managed to work out how to share holograms a while back, which is a requirement for this approach (although there are now multiple ways to share holographic environments between HoloLenses). I’m going to use my trusty Virtual Earth application, which has a hi-res image of the Earth overlain on a sphere and can be shared among a group for class use. Miraculously, on firing everything up for the first time (well, not the first time, as I didn’t read the instructions properly and booted things up in the wrong order…!) it worked! So many more pixels in my holographic image captures:

Before: just using the HoloLens’s ‘webcam’.
After: using the DSLR composite from exactly the same position. The hologram is way more solid-looking and much higher resolution, plus the background is much clearer.
The outcrops look pretty good too: here’s one created from drone imagery

And it works for movies too:

1080p’s of relief. It only took a few months of getting all these things put together and having the confidence to try it. The next job will be getting holograms to sync up so they’re much more dynamic in these videos!

~M

1 year in…

The last couple of months have been a little busy. Between getting back to some research from the Whillans Ice Stream in Antarctica and continuing development of the Fossett Lab’s applications, it’s been a bit hectic. Throw in a SeismoDome planetarium show (which was a hit, and we may have some shows outside of NYC) and there hasn’t been all that much time to write any blog posts!

SeismoDome at the Hayden Planetarium

The annual AGU meeting in New Orleans is upon us and preparation is well underway for presenting a demonstration of the Fossett Lab’s work to the scientific community. Given the expected unreliability of the WiFi at AGU, I’ve been putting together a standalone demo app that should show off a variety of the topics we’ve been working on: outcrops, atomic structures and earthquake catalogs. We’re hoping that the community will be generous enough to help us grow the content, as the HoloLens-side application is pretty much ready for it.

Also in prep for AGU, my GitHub site has gotten some much-needed attention and I’m starting to upload a few of the applications. It’s still pretty rough but the essentials will be up there. I’ve also bought business cards for the first time ever…

This last year has been a lot of learning for me. Designing user experiences (UX) was not really part of my Ph.D., but it has been a lot of fun. It’s also been mostly solo work and a lot of self-learning, so it’s a relief to be introducing a couple of undergraduates into the lab in the new year: one working on importing the Rudolph rock collection into a holographic environment, and one working more closely with me on the design of Earth Science applications (and hopefully fixing a lot of my poorly-written code!). Bring on 2018!

Sound and structure

Earthquakes make a noise… they’re just so low in frequency that we can’t hear them. Our ears are excellent at picking out strange things or patterns within what would look like noise if you displayed the waveforms visually. So can we use sound to uncover more information in our data than current analysis methods can? This is how the Seismic Sound Lab started.

Certainly sound adds a more immersive feel to any data visualization. This last month, in the run-up to Lamont’s Open House, I’ve been working on developing new visuals for the Sound Lab using Unity 3D. This game engine allows for much more dynamic visuals, and the ability to fly inside a subduction zone. It’s been a bit of a challenge to get the visuals right (they’re still not as good as I want them to be!), but we’ve put together a demo of 17 years of Chilean seismicity that begins to show the capabilities of using gaming software for data viz.

Catalog movie for 17 years of Chilean earthquakes. The surface of the Earth is see-through; the circle at the center is the Earth’s outer core.
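For anyone wondering how the hypocenters end up inside that see-through Earth: each event’s latitude, longitude and depth gets mapped to a point within a unit sphere. A sketch of the conversion is below, using my own assumed axis convention rather than necessarily the one in the show:

```csharp
using UnityEngine;

// Maps an earthquake's latitude/longitude (degrees) and depth (km) to a point
// inside a unit-radius Earth centered on the origin (y up through the pole).
public static class HypocenterMapping
{
    const float EarthRadiusKm = 6371f;

    public static Vector3 ToUnity(float latDeg, float lonDeg, float depthKm)
    {
        float r = (EarthRadiusKm - depthKm) / EarthRadiusKm; // 1.0 at the surface
        float lat = latDeg * Mathf.Deg2Rad;
        float lon = lonDeg * Mathf.Deg2Rad;

        return new Vector3(
            r * Mathf.Cos(lat) * Mathf.Cos(lon),
            r * Mathf.Sin(lat),
            r * Mathf.Cos(lat) * Mathf.Sin(lon));
    }
}
```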

The three-dimensional aspect allows us to show off structures illuminated by earthquake locations. We’re also developing visuals for a number of other significant earthquakes: Parkfield, Sumatra, Haiti, and Tohoku. For the latter we’re also developing 3D ground motion visualizations and sounds utilizing the Transportable Array. The sounds are still in the works, but here’s a preview showing where the TA stations are in the Midwest of the USA:

Obviously all the visuals will be developed for the planetarium show! The flythroughs of the subduction zones look really good in the dome; unfortunately I can’t really convey the full effect here on a computer screen, but here’s the general idea:

Lots to do still!

~Martin

Protein (holo) synthesis

3D holographic models are all well and good, but where the technology may provide an edge is with models that are impossible to physically build (3D printing has come a long way, but it has yet to print out objects that can hover!). This became more apparent to me when I was asked whether we could visualize a protein molecule using the HoloLens.

Now, being a geophysicist by trade, I had no idea where to begin, or what to load up that would be useful! I’m still a little unsure about what will be most beneficial to researchers and educators, but I think the implementation of holographic models in this instance could really help.

There is software out there that can represent proteins and other biological structures in 3D, and I was pointed towards PyMol as a good place to start. OK, so I could get a molecule visualized in 3D; but could it be rendered on the HoloLens, or would it be too much information?

First up, I needed to convert the file format to something Unity could understand. This wasn’t trivial: although PyMol can export the COLLADA .dae format, when reading these into Unity the colors (which can be important) are not retained. Shifting to Plan B, I loaded a .wrl exported from PyMol into Blender. This seemed to work: all the colors were retained, and when I opened the .blend file in Unity they were all present and correct. I loaded up an application on the HoloLens and took a look-see:

Nice, DNA strings and all! Now that was fine, but some things needed tweaking. Firstly, the spheres are probably not the best way to visualize the organic chains, and some of the elements are overlain by others, so I’ll change them to PyMol’s ‘stick’ representation. Secondly, and slightly more importantly, the model was incredibly juddery; the loss of a high frame rate is a symptom of trying to render too much information at once and leads to a lousy experience. Best to try and decimate the mesh a little. Lastly, it’s a static hologram, and we can do better than that!

Back in Blender I can check whether there are duplicate vertices that can be deleted. This is often the case when models are generated by scientific software that isn’t really optimized for this sort of thing. Yes, there were a whole bunch of duplicates, so removing them will help a lot with reducing the file size and the rendering load. While I was in Blender I also sorted out the shader so that the surfaces appear smooth, as opposed to faceted as in the video above.

Back in Unity I loaded up the improved model (much faster!) and designed a new manipulation toolbox to rotate, move, and resize the hologram so I can place it on my desk. Here’s the final version:

It’s no longer jerky, those smooth shaders are much nicer to look at, and the ‘sticks’ look good too! The next step will hopefully be adding a whole load more molecules, perhaps visualizing relative sizes and how they interact. As it turns out, when I export this prefab as an AssetBundle the file size is less than 1 MB… which should make for a fast load time from an online server!
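For reference, building that AssetBundle is just a short editor script once the prefab has been given a bundle name in the Inspector. The menu path and output folder here are placeholders of mine (and the folder has to exist before the build runs):

```csharp
using UnityEditor;

// Editor-only script: builds every asset that has been given an AssetBundle name
// into the output folder, ready to copy up to the web server.
public class BuildBundles
{
    [MenuItem("Fossett Lab/Build AssetBundles")]
    static void Build()
    {
        BuildPipeline.BuildAssetBundles(
            "ServerData",                     // output folder inside the project
            BuildAssetBundleOptions.None,
            BuildTarget.WSAPlayer);           // the build target HoloLens apps use
    }
}
```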

~Martin

Behind the scenes: HoloLens Photogrammetry

A few years back, the Earth Science building at WashU was dedicated, and the wealthy donor whose name the building now bears generously gave the department a number of valuable rock samples. Some of them are precious and fragile enough that they are not on display in the cabinets in the entrance hallway, which seems a shame. However, it makes for a good project to learn to do photogrammetry a little better!

In the lab we set up a light box where we can place the samples onto a turntable, so we can rotate them without having to move the camera. I also put a couple of scale bars on the turntable to use as a reference for scaling the object correctly during the photogrammetry steps. After each photo I rotated the turntable about 10 degrees, so about 36 photos per lap, and then I did two more laps at different angles in order to get plenty of coverage. I would normally use a remote with the camera and spend a little time getting the camera levels correct, but I didn’t have a remote with me, and I just wanted to use this as an example anyway.

Screenshot of Photoscan – showing the point cloud, camera locations (blue squares), and marker locations along the scale bars.

Once the files are loaded into the photogrammetry software (Agisoft Photoscan), the first thing to do is to add markers to the photos at known places on the scale bars and set their locations (at 1 cm intervals). Doing this on a bunch of photos allows the software to correctly scale the final model. The software then does a lot of the heavy lifting: aligning the cameras, calculating common points, and adding color to the vertices. After a couple more steps you end up with a textured 3D mesh of the rock sample.

The final textured model. The change in level of detail is down to the camera being out of focus when covering that area… I also probably used too large an aperture!

Now that we have our 3D model of the rock sample, we could just leave it there, stuck on a 2D monitor in perpetuity… but we have a HoloLens! Exporting an FBX file and a 4K PNG texture to Unity is relatively straightforward. Unity’s coordinate system is different from Photoscan’s, so I need to rotate the model and place it at the correct point in the scene. I also noticed that my scale was now off by a factor of 100, which is probably down to the Photoscan markers’ units being in centimeters.
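A small component like the one below sorts out both issues, assuming a z-up, centimeter-unit export from Photoscan (this is a sketch of the idea rather than the exact fix in the project; the rotation needed depends on how the model was exported):

```csharp
using UnityEngine;

// Undoes the coordinate mismatch on the imported Photoscan model.
public class PhotoscanImportFix : MonoBehaviour
{
    void Awake()
    {
        transform.localRotation = Quaternion.Euler(-90f, 0f, 0f); // z-up -> Unity's y-up
        transform.localScale = Vector3.one * 0.01f;               // cm -> m (the factor of 100)
    }
}
```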

Scene set up in Unity

Some new scripts have also just been released on Microsoft’s Mixed Reality GitHub which I thought would be good to test out. They’ve added some handy manipulation tools, which I’ve added to this scene. I’m not sure whether they’ll be easily transferable to the current apps I’ve been developing, as they seem a little buggy at the moment. This scene then gets deployed to the HoloLens and I can test out how well Photoscan was able to recreate the sample:

It turned out pretty well, I think, given that it was just a rough set of photos and I didn’t use the highest accuracy settings in Photoscan. We could end up with a nice little holographic rock museum before too long!

Under the Dome

A slight change of format…

Since moving to New York, I’ve been working (in part) with Dr Ben Holtzman on his Sounds of Seismology project. This also encompasses an ongoing series where we present these sounds and visualizations in the Hayden Planetarium in the American Museum of Natural History in what we call “SeismoDome”! Here’s the video National Geographic did on it:

The bonus of the planetarium is that we get to use a huge, immersive dome to project the visuals, as well as a 23-speaker system to direct sound. During this lecture we take the audience through the Earth from earthquake to seismometer, explaining where earthquakes happen and how data from them can illuminate the interior of our planet. It makes for a spectacular show; the only problem is how to display a planet on something that is supposed to be displaying the sky! This week we had our first test of some new visuals that I’ve been putting together that will hopefully make better use of the dome’s attributes.

The Japan subduction zone defined by earthquake locations in the Hayden Planetarium. Blue colors are shallow events, then red through yellow as the slab descends into the mantle. It’s really hard to take good pictures…

The good news is that the way we’re making the videos will work! We do still need to adjust viewing angles and put the movies together with the surround sound, so there’s plenty to get on with.

A week in St. Louis

It’s always nice to come back to St. Louis… so much cheaper than NYC!

There were no demos this time while I’ve been back, which means just getting my head down to make sure that the redesigned Virtual Earth app (where all the models are hosted on a remote server as opposed to being loaded locally on the HoloLens) is working well. We’ve managed to get the shared environment working, where multiple people see the same hologram in the same place. This is really important when it comes to having students in here: an instructor can literally point at a hologram and it’s at the same point for all users.

The hosted models have been a revelation. The load time for the app has been drastically reduced and it also allows us to host so many more models. Eventually we could have thousands to choose from!

In the following video you’ll notice that you can also talk to the holograms, which allows simple measurements (reasonably accurate for now!) that may be useful for instructors in the future. I’m imagining holo-stratigraphic columns… hmm… we’ll see!
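Under the hood the measurement itself is nothing fancy: once two points have been dropped on the hologram, it’s just a distance between Unity world positions, and because the models are imported at true scale, Unity units map directly to meters. A rough sketch (the hook name and the wiring to the voice command are hypothetical):

```csharp
using UnityEngine;

// Two-point measurement sketch: the user drops a pair of markers on a hologram
// (e.g. via gaze plus a voice command) and the distance between them is reported.
public class HoloRuler : MonoBehaviour
{
    Vector3? firstPoint;

    // Called with the gaze hit position each time the user says the measure keyword.
    public void OnMeasurePoint(Vector3 hitPosition)
    {
        if (firstPoint == null)
        {
            firstPoint = hitPosition;
            return;
        }

        // Models are imported at true scale, so Unity units correspond to meters.
        float meters = Vector3.Distance(firstPoint.Value, hitPosition);
        Debug.Log("Measured distance: " + meters.ToString("F2") + " m");
        firstPoint = null;
    }
}
```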

~Martin

Site Redesign!

So I’ve decided to update my website (it was slightly old-school and some of the work was a little dated!) and thought that I’d amalgamate my own personal site with this WordPress site. I like having the blog pages front and center!

Prior to this was the blog I started for a field season in Antarctica at the end of 2016; sadly I never fully wrapped it up, as things got a little too hectic when I got back to the States.

After AGU in December I caught up with a few of my colleagues who had stayed on after I left. Sadly they recovered no more stations and thought they would have to leave the final six seismometers in the field to be picked up next year. Thankfully though, in February, a team of PASSCAL staff were able to make it out to West Antarctica and pick up all the stations! From what I’ve heard from other field teams who were down on the ice this last season, ours was the only one able to complete all of its objectives for the season. Hopefully next year things will be better for folks in Antarctica.

Back in the warmth… I shall endeavor to keep this page updated with videos, pics and the latest things I’ve been working on. That means a whole bunch of augmented reality development on the Microsoft HoloLens for the Fossett Lab at WashU!