This has been a busy couple of months. A lot of February was spent on a plane, and as a result it felt like I was falling behind on a bunch of things. It really took a couple of weeks to get back into the swing of things and start pushing forward again.
The Fossett Lab at WashU is making bigger strides now. I’m into my second year there and it feels like we can start reliably running classes for small groups. We’ve had a couple of mineralogy labs in, and the undergrads who went to the Azores did some testing of how well our outcrop models may help (or not!). We have a CS undergrad, kindly working for us on a fellowship, who is making great strides in getting our web platform up and running and integrated with our HoloLens apps. Microsoft have also been involved, helpfully sending two techs to see us and discuss what they can do for us (probably quite a lot, given their resources). It has also been a time to reflect on the work I’ve done in building the applications. They were really a first attempt, built up in layers of gradually better-written code and graphics. Best to break things down and rebuild.
Well, mostly; there are some good things to keep. But there is one glaring omission: sharing holographic experiences. It’s a part of the AR experience that is key for any instructor wanting to guide students through a set of examples or tasks. So far we had been able to make sure that the holograms appeared in the same place for everyone wearing a HoloLens, but there was no communication between the devices. If one person clicked a button, the rest would have to do the same. Wouldn’t it be better if the instructor could click a button and have it click automatically for everyone? The same goes for rotating and moving holograms. All of this has been elusive.
Up till now….
The way our applications are set up, many of the three-dimensional models that appear as holograms are stored not on each HoloLens but on a remote server that the HoloLens can reach with a simple URL (just like typing an address into a web browser). My thinking had always been that I could download a model onto one HoloLens and then broadcast it to the rest of the connected headsets. That is a really inefficient way of doing what I wanted, and besides, it would be all but impossible without disabling firewalls and other security protections. Materials and textures would have to be copied across and kept in sync, along with the positions of the many parts that make up a fairly complex hologram.
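The fetch-by-URL setup can be sketched like this (in Python for readability; the actual apps are Unity/C#, and the function and cache names here are made up for illustration):

```python
import urllib.request
from pathlib import Path

def fetch_model(url: str, cache_dir: str = "model_cache") -> bytes:
    """Download a 3D model from the remote server, caching it locally.

    Each headset does the equivalent with a simple URL request; the
    cache means a given model only crosses the network once per device.
    """
    cache = Path(cache_dir)
    cache.mkdir(exist_ok=True)
    local = cache / Path(url).name
    if not local.exists():
        with urllib.request.urlopen(url) as resp:
            local.write_bytes(resp.read())
    return local.read_bytes()
```

The point of the cache is exactly the insight below: it is far cheaper for every device to pull the model from the server itself than for one device to re-broadcast geometry to the others.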
Things were compounded by documentation and help on sharing holograms being pretty thin on the ground. There are some examples on Microsoft’s web pages, but they cover only a very simple case, and it isn’t clear how to adapt the code for my own use. As such I’d put off, and put off…. and put off really tackling this problem.
I finally figured out that instead of doing what I had planned, I could just send a signal (a simple ‘go download this’ or ‘go download that’) telling each connected HoloLens to download the model separately, and then place that model inside another object whose position and (if needed) rotation I could share. When I say ‘inside another object’, that object can be completely transparent, blank, nothing, nada but a single point that has a position, rotation and size in three-dimensional space. I feel I may have lost some people! But suffice it to say that I think this should solve all my problems. I’ve worked out how to share simple booleans (either true or false), which is great for signalling whether a button has been pressed, and from that I can signal that a model should be downloaded. This may seem like baby steps to coding experts, but it feels like I’ve learned a lot in the last couple of months! Maybe I’ll write some better documentation…
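The scheme can be shown as a toy simulation (again Python rather than the Unity/C# the apps use, and every class and field name below is invented for illustration). Only a boolean, a model ID, and one transform are ever shared; the geometry never crosses between headsets:

```python
from dataclasses import dataclass, field

@dataclass
class SharedAnchor:
    """The invisible parent object: just a point with a synced transform."""
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0)

@dataclass
class SharedState:
    """The only values actually broadcast between headsets."""
    download_requested: bool = False   # the shared boolean (button press)
    model_id: str = ""                 # which model each device should fetch
    anchor: SharedAnchor = field(default_factory=SharedAnchor)

class Headset:
    def __init__(self, name: str):
        self.name = name
        self.loaded_model = None

    def observe(self, state: SharedState) -> None:
        # Each device downloads the model *itself* when the flag flips;
        # only the small signal travels over the network.
        if state.download_requested and self.loaded_model != state.model_id:
            self.loaded_model = state.model_id  # stand-in for a real download

    def model_world_position(self, state: SharedState) -> tuple:
        # The model is parented to the anchor, so moving the anchor
        # moves the hologram for everyone at once.
        return state.anchor.position

# Instructor presses 'Sync': flip the boolean and name the model.
state = SharedState(download_requested=True, model_id="olivine")
headsets = [Headset("A"), Headset("B")]
for h in headsets:
    h.observe(state)

# Instructor drags the hologram: only the anchor's transform updates.
state.anchor.position = (1.0, 0.5, 2.0)
```

After these two steps, every headset has loaded "olivine" independently, and all of them report the same world position for it, because each one reads the model’s position off the shared anchor.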
Now to bring all of that into the applications that are furthest along. Our mineral structure app is now mostly rebuilt, and a convenient ‘Sync’ button has been added to make things easy for instructors (see the above video). All the models are now on the remote server (which means a lot less lag at startup), and on that front, uploading new mineral structures is a doddle and requires no extra faffing around in Unity on my part. The outcrop application will likely be a little harder to apply what I’ve learned to. However, it is in the process of being stripped down and built back up better and more flexible than ever! I’ve also incorporated a couple of online repositories of imagery (NASA’s Global Imagery Browse Services) and data (USGS earthquake locations), so this should provide a much broader range of geologic and planetary information. By the time I’m done with them, they’ll feel much different and much more inclusive.
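For a sense of how easy the earthquake feed is to work with: USGS serves its catalogues as GeoJSON (for example from its FDSN event web service at earthquake.usgs.gov), and pulling out locations is a few lines. Here a tiny inline sample stands in for a live response, and the function name is mine, not part of any API:

```python
def quake_locations(geojson: dict) -> list:
    """Return (longitude, latitude, depth_km, magnitude) tuples
    from a USGS-style GeoJSON earthquake feed."""
    out = []
    for feat in geojson.get("features", []):
        lon, lat, depth = feat["geometry"]["coordinates"]
        out.append((lon, lat, depth, feat["properties"]["mag"]))
    return out

# A minimal stand-in for one event from a real feed.
sample = {
    "features": [
        {"geometry": {"coordinates": [-155.4, 19.2, 31.0]},
         "properties": {"mag": 4.6}},
    ]
}
print(quake_locations(sample))  # [(-155.4, 19.2, 31.0, 4.6)]
```

Tuples like these are exactly what gets mapped onto hologram coordinates: longitude and latitude place the marker on the globe, and depth pushes it below the surface.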
I think I’m probably going to have just gotten good at all this and then they’ll change everything…