Up until now I’ve been trying just to get content into the HoloLens environment: producing 3D models in various guises and importing them into Unity, or throwing them up onto a server for the HoloLens to fetch.
The next stage is telling the world (think big?!) what we’re doing, and that means displaying what is essentially a one-person projector to a whole room of people. Conveniently, the HoloLens lets you live-stream what someone is seeing through the device over a WiFi connection. The tiny webcam-like camera sits on the HoloLens and combines its image with the holograms on display. It does a nice enough job for a quick demo, but it’s pretty low-resolution and grainy, so we should think about making something a little more polished.
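Conceptually, that combining step is just alpha-blending a rendered hologram layer over each camera frame. Here’s a minimal sketch of the idea in Python with numpy; this is the general principle behind mixed-reality capture, not the device’s actual pipeline, and the function name is my own.

```python
import numpy as np

def composite(camera_frame, hologram_rgba):
    """Alpha-blend a rendered hologram layer over a camera frame.

    camera_frame:  H x W x 3 float RGB array in [0, 1]
    hologram_rgba: H x W x 4 float RGBA array in [0, 1], where alpha = 0
                   means "no hologram here, show the camera image"
    """
    alpha = hologram_rgba[..., 3:4]  # keep last axis for broadcasting
    return hologram_rgba[..., :3] * alpha + camera_frame * (1.0 - alpha)

# A grey camera frame with a half-transparent white hologram over it:
frame = np.full((2, 2, 3), 0.2)
holo = np.zeros((2, 2, 4))
holo[0, 0] = [1.0, 1.0, 1.0, 0.5]
out = composite(frame, holo)
```

The pixel under the hologram blends to 0.6 (half white, half grey), while untouched pixels stay at the camera’s 0.2.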
About a year or so ago, a group of HoloLens developers came up with a way of solving this issue: upgrading the live-stream optics by jury-rigging a HoloLens on top of a DSLR camera. I had a look through the tutorials back then and chalked it off as something fun to do, but well above my competence level at the time. However, these last few months I’ve been chipping away at getting things set up and today… success!
First I took stock of what we had and what the major expenditures were going to be. Thankfully the lab had bought a nice new DSLR (Nikon 5300) that we use for field work, so no spending needed there. The lab desktop was lacking an HDMI-in, so we had to splash out on a capture card; we used the one listed on the Github site, which had somewhat mixed reviews, but in hindsight I can say I’ve had no issues getting it up and running. The software SDKs needed were free downloads (woo!), so the final conundrum was how to attach the HoloLens to the DSLR. The tutorials mention an elaborate setup involving both 3D-printed and machined parts, which I’m sure makes for a very secure housing for the expensive HoloLens. Seemed a little over-elaborate for me… to the interwebs! I stumbled across this 3D-printable model that I ended up sending to the library at Columbia:
This did the job nicely.
So, all the hardware is sorted, and the capture card works well, live-streaming through the camera’s HDMI-out to the desktop. Now we need to calibrate things, as the software has to composite images from two cameras (the DSLR and the HoloLens) that sit in different positions. Calibration time!
This process spits out a text file providing the transform information needed to combine and overlay the two images, so that the holograms appear in the right positions.
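Under the hood, a transform like that amounts to a rotation and a translation mapping positions from one camera’s space into the other’s, followed by a projection onto the DSLR frame. Here’s a minimal sketch in Python with numpy; I’m assuming a simple pinhole camera model, and the function names, focal length, and offset values are illustrative rather than what the real calibration file contains.

```python
import numpy as np

def extrinsic_matrix(rotation, translation):
    """Build a 4x4 rigid transform from a 3x3 rotation matrix and a
    3-vector translation -- the kind of data the calibration step produces."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def project_point(point, extrinsic, focal_length, principal_point):
    """Map a 3D hologram position (metres) into 2D pixel coordinates on
    the DSLR frame, using a basic pinhole camera model."""
    x, y, z, _ = extrinsic @ np.append(point, 1.0)  # into DSLR camera space
    u = focal_length * x / z + principal_point[0]
    v = focal_length * y / z + principal_point[1]
    return np.array([u, v])

# Identity rotation, DSLR offset 10 cm sideways from the HoloLens:
E = extrinsic_matrix(np.eye(3), np.array([0.10, 0.0, 0.0]))
# A hologram 2 m in front of the rig, projected onto a 1920x1080 frame:
uv = project_point(np.array([0.0, 0.0, 2.0]), E,
                   focal_length=1500.0, principal_point=(960.0, 540.0))
```

With no calibration (a zero offset) the hologram would land dead centre at (960, 540); the 10 cm offset shifts it sideways in the frame, which is exactly the misalignment the calibration file exists to correct.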
The final step is incorporating this within an application. I managed to work out how to share holograms a while back, which is a requirement for this approach, although there are now multiple ways to share holographic environments between HoloLenses. I’m going to use my trusty Virtual Earth application, which has a hi-res image of the Earth overlaid on a sphere and can be shared among a group for class use. Miraculously, on firing everything up for the first time (well, not quite the first time, as I didn’t read the instructions properly and booted things up in the wrong order…!) it worked! So many more pixels on my holographic image captures:
And it even works for movies too:
1080p’s of relief. It only took a few months of getting all these things put together and having the confidence to try it. The next job will be getting the holograms to sync up, so they’ll be much more dynamic in these videos!