A solution to saving Azure Spatial Anchor info

Just a quick one. In my last post I mentioned that the information required to share an Azure Spatial Anchor was stored either in an ASP.NET web service, which was a temporary solution that wiped itself after a few hours, or in Azure Cosmos DB, which is a little pricey and a little unnecessary. Step forward NoSQL databases.

I had absolutely no idea what a NoSQL database was last week, but after a colleague recommended one it seemed like a simple, no-frills solution to my problem. What's more, Google offers a free database on its Firebase site. Sweet.

What is NoSQL

A NoSQL (non-SQL or non-relational) database provides a stripped-down way of storing and retrieving information that isn't a series of relational tables (like those used in SQL). I got my head around it by thinking of it as a series of nested structures (like in MATLAB) or as a JSON file. I could POST, PUT and GET JSONs using a REST API (Google Firebase offers a unitypackage solution, but that proved to be more trouble than it was worth), which means it should be fairly straightforward to reprogram the AnchorExchanger.cs script used in the previous post.

Designing My NoSQL Database for ASA

There are essentially three pieces of important information I need to store:

  1. Anchor name – a memorable string defined by the user on creation; it can be alphanumeric
  2. GUID – this is a unique string created when the anchor is made and is used by the Watcher to find the anchor at a later time. It is a long string of numbers and letters, so it isn't easy for a user to remember
  3. Anchor expiration date – when the anchor expires; it could be a couple of hours, it could be years! This is saved as a System.DateTime struct.

I also want to include a fourth field, the anchor creation date, just for reference and code checking. So all I need to do is create a list of "Anchor Classes", where each Anchor Class is my "structure" with the fields in the list above, serialize it as JSON and then PUT it onto Firebase with a web request. PUT overwrites the NoSQL database, but I found this lets me do some housekeeping on the database whenever I create a new anchor. Essentially I download the database when I start a session, keep only the viable anchors that are still in date, add a new anchor if necessary, and then PUT a new JSON containing only the viable anchors back onto the NoSQL database.

So here’s my new FirebaseExchanger.cs:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;
using Newtonsoft.Json;
using System.Net;
using System.Text;
using UnityEngine.Networking;

public class FirebaseExchanger : MonoBehaviour
{
    /// <summary>
    /// Gets current stored anchor information, downloads, checks for expiration.
    /// On creating a new ID, checks for conflicting name and uploads to Firebase.
    /// </summary>

    //Private variables
    List<AzureSpatialAnchorObject> anchorObjects = new List<AzureSpatialAnchorObject>();
    bool conflictFound;

    //Public variables
    public string anchorName { get; set; } //this is set by the UI Input Field

    // Initial settings
    void Start()
    {
        conflictFound = false;
        StartCoroutine(FetchCurrentAnchors());
    }

    //Script to put (overwrite) the anchor selection on Firebase
    public void PutAnchors(string anchorIdentifier, DateTime expiration)
    {
        AzureSpatialAnchorObject anchorObject = new AzureSpatialAnchorObject();
        anchorObject.name = anchorName;
        anchorObject.identifier = anchorIdentifier; //Guid.NewGuid().ToString(); //This will be the ASA identifier
        anchorObject.dateCreated = DateTime.Now;
        anchorObject.dateExpired = expiration; //this is the time set by the ASA expiration

        anchorObjects.Add(anchorObject);
        var json = JsonConvert.SerializeObject(anchorObjects);

        var request = WebRequest.CreateHttp("https://<YOUR_DATABASE_NAME>.firebaseio.com/anchors.json");
        request.Method = "PUT";
        request.ContentType = "application/json";
        var buffer = Encoding.UTF8.GetBytes(json);
        request.ContentLength = buffer.Length;
        using (var stream = request.GetRequestStream())
        {
            stream.Write(buffer, 0, buffer.Length);
        }
        //The PUT isn't completed until the response is requested, so fetch and discard it
        request.GetResponse().Dispose();
    }

    //Finds anchor name in stored anchor list
    public string FindAnchorByName()
    {
        string anchorToFind = null;
        foreach (var anchor in anchorObjects)
        {
            if (anchor.name == anchorName)
            {
                Debug.Log("Found " + anchor.name + ": " + anchor.identifier);
                anchorToFind = anchor.identifier;   
            }
        }
        return anchorToFind;
    }

    //Fetches the current list of anchor information on Firebase
    public IEnumerator FetchCurrentAnchors()
    {
        conflictFound = false;
        var uwr = UnityWebRequest.Get("https://<YOUR_DATABASE_NAME>.firebaseio.com/anchors.json");
        yield return uwr.SendWebRequest();

        //Continue if there are anchors stored, otherwise there's no point doing any more
        if (uwr.downloadHandler.text != "null")
        {
            List<AzureSpatialAnchorObject> downloadedAnchors = JsonConvert.DeserializeObject<List<AzureSpatialAnchorObject>>(uwr.downloadHandler.text);
            //print(downloadedAnchors.Count + "Anchors Stored on Firebase");
            foreach (var anchor in downloadedAnchors)
            {
                //Check if anchor has expired - if it has it's not added to the anchorObjects list and so when a new list is uploaded it won't be included
                if (anchor.dateExpired > DateTime.Now)
                {
                    anchorObjects.Add(anchor);
                }
            }
        }
    }

    public bool CheckForNameConflict(string potentialName)
    {
        conflictFound = false;

        foreach (var anchor in anchorObjects)
        {
            if (anchor.name == potentialName)
            {
                conflictFound = true;
            }
        }

        return conflictFound;
    }

    public bool CheckIfNameExists(string inputName)
    {
        bool nameExists = false;
        foreach (var anchor in anchorObjects)
        {
            if (anchor.name == inputName)
            {
                nameExists = true;
            }
        }

        return nameExists;
    }

	//Anchor class
	public class AzureSpatialAnchorObject
	{
		public string name { get; set; }
		public string identifier { get; set; }
		public DateTime dateCreated { get; set; }
		public DateTime dateExpired { get; set; }
	}
}

This script also has a couple of methods that deal with conflicts between user-defined anchor names and with checking whether an anchor name already exists in the database. Although these aren't used within this script, they are used elsewhere in the project.

The PutAnchors method adds an entry to the anchorObjects list (which is populated at the start of the session if anchor information already exists in our database), and creates an object that looks like this at https://<YOUR_DATABASE_NAME>.firebaseio.com/anchors.json:

A conveniently placed coffee table was used for the location of this anchor
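For reference, the serialized list stored at that URL ends up looking something like this (the identifier and dates below are made-up placeholders; the fields come straight from the AzureSpatialAnchorObject class above):

[
  {
    "name": "coffee",
    "identifier": "00000000-aaaa-bbbb-cccc-000000000000",
    "dateCreated": "2020-02-01T09:15:00",
    "dateExpired": "2020-02-02T09:15:00"
  }
]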

This anchor will now be available for download for one day, and can be found in an app by typing "coffee" into the anchorName string in the FirebaseExchanger.

Seems to work! Nice 🙂

Stripping down Azure Spatial Anchors

It’s been a while… and this is a long one!

Quite a bit has changed since my last post. We got a dog. Moved to DC. Career moves. But still developing AR apps!

Our proud pooch!

Over the last year, pretty much since the Azure Spatial Anchors resource was announced, I’ve dabbled in using them for my augmented reality applications. But up until now I’ve mostly hacked and bastardized pre-written code for my own needs, as opposed to really getting down to it and understanding the resource from the ground up. No more!

What are Azure Spatial Anchors?

Simply put, an anchor is a specific point in 3D space that objects can be placed in reference to. Think of it like an origin point. What makes it an Azure Spatial Anchor (ASA) is tying that origin point to a surrounding set of reference points based on stationary, real-world features (such as table edges, color contrasts, sofas, sleeping dogs, etc.) that end up being relatively unique. This information (the real-world feature points and the anchor position in relation to them) gets uploaded to the cloud. When another user downloads it, it gets compared to that user's surroundings, and if there's a match the anchor can be placed in its correct position. More importantly, this information can be used on multiple AR platforms (iPhone, Android and HoloLens), so it's a really powerful tool for AR shared experiences, allowing multiple devices to experience AR content in the same place!

So what’s been the problem?

A number of things: time, need… but mostly the documentation, for a non-CS-major like myself, isn't the clearest. Furthermore, the examples to try for yourself are now a little old and haven't been updated to include the latest Mixed Reality Toolkit (MRTK) features that we use so regularly in HoloLens app development (such as the recent MRTK compatibility with iPhone and Android); in fact, the first samples I hacked about with didn't include MRTK at all! The samples also catered to every possible use case from the same set of scripts, which meant they were a little cumbersome and often layered on top of each other, and that was a little confusing for a home-schooled coder.

What I really wanted was a tutorial that added ASA to an app that was already set up, from importing the AzureSpatialAnchors.SDK unitypackage to getting it working. Simply put, what I wanted was to be able to:

  1. Create an anchor based on a real-world point
  2. Upload that anchor as an ASA
  3. Close my app
  4. Re-open my app and find that ASA, placing it in the same position

Setting up the cloud services

As the name suggests, ASA requires a service to be set up on Microsoft’s Azure cloud services platform. This Azure Spatial Anchors resource (which at the time of writing is still in Free Preview) can be set up following the first section of the documentation here. The important numbers to remember after setting the resource up are the Account ID, and the Account Primary Key.

The other service that needs to be set up is a way of sharing any created ASA identifiers (a GUID) across platforms. This can be done in the form of an online table where one column is the GUID (which is randomly generated) and the other is some sort of number/string that can be entered by the user (which is not randomly generated). There are two methods for doing this listed in the ASA documentation. The first is an ASP.NET Core web app that can be set up following the steps here. You'll need to download the sample project (which also contains some example Unity projects that we're ignoring for the purposes of this tutorial) in order to deploy the web app. The ASP.NET web app works well enough, but the references to the anchors get wiped within a few hours, so in order to have something persist longer you will need to set up an Azure Cosmos database following the steps outlined here. I'm currently testing this out, but it appears to cost ~$0.032 per hour, so it may not be the best solution for just mucking around with ASA. Whichever option you choose, your app service can be reached at a URL like "https://<YOUR_NAME>.azurewebsites.net/api/anchors".

Example of running Azure Cosmos DB. The anchor info is stored in a table, RowKey is the number a user can use to call the AnchorKey required to locate an ASA

Setting up Unity

Ok, first up, I want this test app to be based on current MRTK features, and as I don't have a HoloLens with me to develop on, I'm stuck using my iPhone. Should be a good test of Microsoft's cross-platform abilities!

First, following this document, add MRTK 2.2.0. Also make sure to import the latest MRTK.Providers unitypackage, as this contains the camera settings needed to work on mobile AR devices. Make sure that the ARFoundation, ARCore and ARKit packages are on their correct versions using Unity's Package Manager.

I then added the latest AzureSpatialAnchors.SDK from this UnityPackage link.

With these packages installed, I created a new scene "AzureSpatialAnchors" and added MRTK to it using the menu Mixed Reality Toolkit > Add to Scene and Configure…, set the camera settings, then made a couple of additions to the MixedRealityPlayspace gameobject. MRTK will add the necessary ARFoundation scripts at runtime, but I wanted to make sure that ARSessionOrigin was on the MixedRealityPlayspace gameobject, as well as ARRaycastManager, ARReferencePointManager (necessary for ASA; you don't need to add anything for the Reference Point Prefab), and ARPlaneManager.

MixedRealityPlayspace components

ARDefaultPlane is a prefab that contains ARPlane, ARPlaneMeshVisualizer, plane scripts and a line renderer. These prefabs are drawn when ARPlaneManager detects a flat surface (like a table), and we can interact with them.

ARDefaultPlane prefab

User Interface (UI)

Now to set up some basic interactions. I essentially want two workflows: one to create a new anchor, and one to find an already created anchor. I added two UI buttons, "Create Anchor" and "Find Anchor", on a canvas within the MixedRealityPlayspace gameobject (plus a third for when I don't want to do either of those things… it can happen!), and wired them up to a RoomManager script where I handle all the UI interactions. I also added a UI text box to display debug text while the app is running. "Find Anchor" opens up a couple of extra UI objects where the user can enter an anchor number and then press a button to go find that anchor.

RoomManager was mostly written for a different app that I want to add ASA to, so it contains a bit of code that isn't used in this app, but I wanted the setup to be similar. Here's what needs to be used:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using TMPro;
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit;
using UnityEngine.XR.ARFoundation;
using Microsoft.Azure.SpatialAnchors.Unity;

public class RoomManager : MonoBehaviour, IMixedRealityPointerHandler
{
public GameObject createAnchorButton;
public GameObject findAnchorButton;
public GameObject continueButton;
public GameObject inputTextBox;
public GameObject startFindingAnchorButton;
public GameObject anchorObject;
public Pose anchorPose;
public string anchorNumber { get; set; }
public TextMeshProUGUI directionText;   //UI text box used for on-screen instructions and debug messages

ARRaycastManager raycastManager;
GameObject newAnchorObject;

bool isPlacing;
bool anchorValid;

// Start is called before the first frame update
void Start()
{
#if UNITY_IOS
     //Find the raycastManager in the scene
     raycastManager = FindObjectOfType<ARRaycastManager>();
#endif
}

// Update is called once per frame
void Update()
{
    //this called when creating an anchor
    if (isPlacing)
    {
        //place an anchor object on a plane, from a ray cast from the center of the screen
        var screenCenter = Camera.current.ViewportToScreenPoint(new Vector3(0.5f, 0.5f));
        var hits = new List<ARRaycastHit>();
        raycastManager.Raycast(screenCenter, hits, UnityEngine.XR.ARSubsystems.TrackableType.PlaneWithinPolygon);

        anchorValid = hits.Count > 0;
        //if the ray hits an plane created by ARPlaneManager, place the anchorObject prefab
        if (anchorValid)
        {
            //get the pose of the hit position
            anchorPose = hits[0].pose;

            newAnchorObject.GetComponentInChildren<Renderer>().enabled = true;

            var cameraForward = Camera.current.transform.forward;
            var cameraBearing = new Vector3(cameraForward.x, 0, cameraForward.z).normalized;
            anchorPose.rotation = Quaternion.LookRotation(cameraBearing);

            newAnchorObject.transform.SetPositionAndRotation(anchorPose.position, anchorPose.rotation);
            //now lets confirm the anchor position by tapping the screen
            if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
            {
                isPlacing = false;
                newAnchorObject.GetComponentInChildren<Renderer>().material.color = Color.yellow;
                newAnchorObject.GetComponent<SpatialAnchorManager>().enabled = true;
                newAnchorObject.AddComponent<CreateASA>();   //This is the script that handles the creation and uploading of the ASA
                newAnchorObject.GetComponent<CreateASA>().feedback = directionText;
            }
        }
        else
        {
            newAnchorObject.GetComponentInChildren<Renderer>().enabled = false;
        }
    }
}

//Called when "Create Anchor" button is tapped
public void OnCreateSelected()
{
    createAnchorButton.SetActive(false);
    findAnchorButton.SetActive(false);
    continueButton.SetActive(false);
    directionText.text = "Place anchor location...";
    isPlacing = true;  //place the anchor
    anchorValid = false;

    //Lets go create an anchor position
    newAnchorObject = Instantiate(anchorObject);
}

//Called when "Find Anchor" button is tapped
public void OnFindSelected()
{
    createAnchorButton.SetActive(false);
    findAnchorButton.SetActive(false);
    continueButton.SetActive(false);

    //Open the input text box to allow anchor number to be entered
    directionText.text = "Enter anchor number:";
    inputTextBox.SetActive(true);
    startFindingAnchorButton.SetActive(true);
}

//After a number has been entered, goes and finds the ASA associated with that number
public void OnStartFindingSelected()
{
    inputTextBox.SetActive(false);
    startFindingAnchorButton.SetActive(false);
    print("finding anchor #: " + anchorNumber);

    //Create an anchor prefab but make it invisible by turning off the renderer
    newAnchorObject = Instantiate(anchorObject);
    newAnchorObject.GetComponentInChildren<Renderer>().enabled = false;
    newAnchorObject.GetComponent<SpatialAnchorManager>().enabled = true;
    newAnchorObject.AddComponent<FindASA>();   //This script goes and finds the anchor and plots it
    newAnchorObject.GetComponent<FindASA>().anchorNumber = anchorNumber;
    newAnchorObject.GetComponent<FindASA>().feedback = directionText;
}

//Not used in this app, but useful for others when you don't want to create or find an anchor
public void OnContinueSelected()
{
    createAnchorButton.SetActive(false);
    findAnchorButton.SetActive(false);
    continueButton.SetActive(false);
    directionText.text = "Place menu location...";
    isPlacing = true;
    anchorValid = false;

    newAnchorObject = Instantiate(anchorObject);
}

//IMixedRealityPointerHandler stubs so the script compiles; pointer events aren't needed in this cut-down app
public void OnPointerDown(MixedRealityPointerEventData eventData) { }
public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
public void OnPointerUp(MixedRealityPointerEventData eventData) { }
public void OnPointerClicked(MixedRealityPointerEventData eventData) { }
}

The UI needs wiring up to the RoomManager script using the UnityEvents in each button’s Inspector:

One last thing we need to do before creating an ASA is providing the user with a visual reference for the anchor position. We need an anchor prefab. This can be any object but for simplicity’s sake I made it a cube. The actual cube object sits within an empty gameobject and is positioned so that the cube will appear to sit on top of a horizontal surface. To the parent gameObject I added the Spatial Anchor Manager from the AzureSpatialAnchors.SDK, and as an additional child I added a 3D text box so that the anchor name (which will be an integer) can be displayed. The AnchorCube prefab is added to the RoomManager script.

Creating an ASA

Now that the basic UI is set up, we can start making anchors! We need some way of talking to the app service we set up earlier so we can store the anchor information in the cloud for another device to access. This is done through the AnchorExchanger script that is bundled with the ASA sample code:

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using UnityEngine;
using System.Collections.Generic;

public class AnchorExchanger
{
//#if !UNITY_EDITOR
		private string baseAddress = "";
		private List<string> anchorkeys = new List<string>();
		public List<string> AnchorKeys
		{
			get
			{
				lock (anchorkeys)
				{
					return new List<string>(anchorkeys);
				}
			}
		}

		public void WatchKeys(string exchangerUrl)
		{
			baseAddress = exchangerUrl;
			Task.Factory.StartNew(async () =>
			{
				string previousKey = string.Empty;
				while (true)
				{
					string currentKey = await RetrieveLastAnchorKey();
					if (!string.IsNullOrWhiteSpace(currentKey) && currentKey != previousKey)
					{
						Debug.Log("Found key " + currentKey);
						lock (anchorkeys)
						{
							anchorkeys.Add(currentKey);
						}
						previousKey = currentKey;
					}
					await Task.Delay(500);
				}
			}, TaskCreationOptions.LongRunning);
		}

		public async Task<string> RetrieveAnchorKey(long anchorNumber)
		{
			try
			{
				HttpClient client = new HttpClient();
				return await client.GetStringAsync(baseAddress + "/" + anchorNumber.ToString());
			}
			catch (Exception ex)
			{
				Debug.LogException(ex);
				Debug.LogError($"Failed to retrieve anchor key for anchor number: {anchorNumber}.");
				return null;
			}
		}

		public async Task<string> RetrieveLastAnchorKey()
		{
			try
			{
				HttpClient client = new HttpClient();
				return await client.GetStringAsync(baseAddress + "/last");
			}
			catch (Exception ex)
			{
				Debug.LogException(ex);
				Debug.LogError("Failed to retrieve last anchor key.");
				return null;
			}
		}

		internal async Task<long> StoreAnchorKey(string anchorKey)
		{
			if (string.IsNullOrWhiteSpace(anchorKey))
			{
				return -1;
			}

			try
			{
				HttpClient client = new HttpClient();
				var response = await client.PostAsync(baseAddress, new StringContent(anchorKey));
				if (response.IsSuccessStatusCode)
				{
					string responseBody = await response.Content.ReadAsStringAsync();
					long ret;
					if (long.TryParse(responseBody, out ret))
					{
						Debug.Log("Key " + ret.ToString());
						return ret;
					}
					else
					{
						Debug.LogError($"Failed to store the anchor key. Failed to parse the response body to a long: {responseBody}.");
					}
				}
				else
				{
					Debug.LogError($"Failed to store the anchor key: {response.StatusCode} {response.ReasonPhrase}.");
				}

				Debug.LogError($"Failed to store the anchor key: {anchorKey}.");
				return -1;
			}
			catch (Exception ex)
			{
				Debug.LogException(ex);
				Debug.LogError($"Failed to store the anchor key: {anchorKey}.");
				return -1;
			}
		}
//#endif
}

To create and upload our ASA I wrote this CreateASA script:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Microsoft.Azure.SpatialAnchors;
using Microsoft.Azure.SpatialAnchors.Unity;
using System;
using System.Threading.Tasks;
using TMPro;

public class CreateASA : MonoBehaviour
{
    //The feedback text
    public TextMeshProUGUI feedback;
    //the cloud anchor that will be uploaded
    CloudSpatialAnchor currentCloudAnchor;
    AnchorExchanger anchorExchanger = new AnchorExchanger();

    async void Start()
    {
        //Firstly, add a CloudNativeAnchor component to the anchorPrefab
        this.gameObject.AddComponent<CloudNativeAnchor>();
        //Then lets include the app service URL in the Anchor Exchanger
        anchorExchanger.WatchKeys("https://<YOUR_NAME>.azurewebsites.net/api/anchors");
        await Initialize();
    }

    public async Task Initialize()
    {
        //Start a ASA session, this starts capturing frames and detecting reference points
        await GetComponent<SpatialAnchorManager>().StartSessionAsync();
        feedback.text = "Started Session";

        //Now lets do the heavy lifting
        await SaveCurrentObjectAnchorToCloudAsync();

        long anchorNumber = -1;

        //Finally lets save the ASA information to the app service
        anchorNumber = (await anchorExchanger.StoreAnchorKey(currentCloudAnchor.Identifier));

        //Apply the anchor name to the anchorObject prefab so there's a visual feedback
        GetComponentInChildren<TextMeshPro>().text = anchorNumber.ToString();

        //And stop the ASA session
        GetComponent<SpatialAnchorManager>().StopSession();
        feedback.text = "Stopped Session";

    }


    protected virtual async Task SaveCurrentObjectAnchorToCloudAsync()
    {
        CloudNativeAnchor nativeAnchor = this.GetComponent<CloudNativeAnchor>();
        //Set the pose of the anchor to that of the anchorObject prefab
        nativeAnchor.SetPose(this.transform.position, this.transform.rotation);

        // If the cloud portion of the anchor hasn't been created yet, create it
        if (nativeAnchor.CloudAnchor == null) { nativeAnchor.NativeToCloud(); }
        
        //Convert the native cloud anchor to a cloud spatial anchor
        CloudSpatialAnchor cloudAnchor = nativeAnchor.CloudAnchor;

        //Set its expiration - in this case 1 week from now
        cloudAnchor.Expiration = DateTimeOffset.Now.AddDays(7);
        
        //In case the real-world scanning doesn't yet have enough reference points, scan some more before uploading, providing user feedback
        while (!GetComponent<SpatialAnchorManager>().IsReadyForCreate)
        {
            await Task.Delay(330);
            float createProgress = GetComponent<SpatialAnchorManager>().SessionStatus.RecommendedForCreateProgress;
            feedback.text = $"Move your device to capture more environment data: {createProgress:0%}";
        }

        //Just to double check the position and rotation
        Pose anchorPose = cloudAnchor.GetPose();
        feedback.text = "Anchor Position: " + anchorPose.position + " Rotation: " + anchorPose.rotation;

        //Now lets upload
        try
        {
            // Actually save
            await GetComponent<SpatialAnchorManager>().CreateAnchorAsync(cloudAnchor);
            feedback.text = "Saved: " + cloudAnchor.Identifier;
            // Store for app service use
            currentCloudAnchor = cloudAnchor;
        }
        catch (Exception ex)
        {
            feedback.text = ex.ToString();
        }
    }
}

Finding an ASA

The fundamental part of finding an anchor is creating what is called a Watcher. This runs in the background, using the reference points collected by the Spatial Anchor Manager and tries to correlate them to an anchor, or anchors, saved on the cloud. After a Watcher has started, if an anchor is found then the SpatialAnchorManager.AnchorLocated event will fire. You can then start using that anchor position to build a world around you that will appear in the same place as another user.

Here’s my FindASA script:

using System.Collections;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.SpatialAnchors;
using Microsoft.Azure.SpatialAnchors.Unity;
using TMPro;
using UnityEngine;

public class FindASA : MonoBehaviour
{
    public TextMeshProUGUI feedback;
    CloudSpatialAnchor currentCloudAnchor;
    AnchorExchanger anchorExchanger = new AnchorExchanger();
    //This is set from the Input Text Box
    public string anchorNumber;
    protected AnchorLocateCriteria anchorLocateCriteria = null;
    CloudSpatialAnchorWatcher currentWatcher;

    // Start is called before the first frame update
    async void Start()
    {
        anchorExchanger.WatchKeys("https://<YOUR_NAME>.azurewebsites.net/api/anchors");

        //Here's what we do when an anchor is located: run the CloudAnchor_Located method
        GetComponent<SpatialAnchorManager>().AnchorLocated += CloudAnchor_Located;
        anchorLocateCriteria = new AnchorLocateCriteria();

        await Initialize();
    }

    public async Task Initialize()
    {
        await GetComponent<SpatialAnchorManager>().StartSessionAsync();
        feedback.text = "Started Session";

        //Lets fetch the GUID associated with the anchorName from the Input Text Box
        string _anchorKeyToFind = await anchorExchanger.RetrieveAnchorKey(long.Parse(anchorNumber));
        if (_anchorKeyToFind == null)
        {
            feedback.text = "Anchor Number Not Found!";
            //No point creating a watcher for a key we couldn't retrieve
            return;
        }

        //Add the GUID to the AnchorLocateCriteria so the watcher knows what to look for
        List<string> anchorIdsToLocate = new List<string>();
        anchorIdsToLocate.Add(_anchorKeyToFind);
        anchorLocateCriteria.Identifiers = anchorIdsToLocate.ToArray();

        feedback.text = "Anchor key to find: " + _anchorKeyToFind;

        //Now let's create the watcher for that anchor GUID
        GetComponent<SpatialAnchorManager>().Session.CreateWatcher(anchorLocateCriteria);
        feedback.text = "Watcher started...";

    }


    private void CloudAnchor_Located(object sender, AnchorLocatedEventArgs args)
    {
        //We found an anchor! Lets get the pose and place an AnchorObject prefab at that position:
        feedback.text = "Anchor " + anchorNumber + " located";
        currentCloudAnchor = args.Anchor;
        Pose anchorPose = currentCloudAnchor.GetPose();
        feedback.text = "Anchor position: " + anchorPose.position;
        this.transform.SetPositionAndRotation(anchorPose.position,anchorPose.rotation);
        this.GetComponentInChildren<Renderer>().enabled = true;
        this.GetComponentInChildren<Renderer>().material.color = Color.green;
        //And label it with its anchor number
        this.GetComponentInChildren<TextMeshPro>().text = anchorNumber;

        //Finally let's stop the ASA session
        GetComponent<SpatialAnchorManager>().StopSession();
        feedback.text = "Stopped Session";
    }
}

Ok let’s see all this in action. First let’s create a new ASA, upload it, and shut down the app:

Here it is now on our app service table, RowKey: 3

And now let’s download and place the anchor in a completely separate session:

So that seems to work how I want it to, which is great! And once the Watcher starts, it finds the anchor very quickly. The Unity app (with all my custom cloud services removed) is up on my GitHub here.

There's still much to do: updating the pose of an anchor, deleting unwanted anchors, and so on, but the basics are here. It's been a long haul, this one, and sorry for the long post, but I really wanted to get it down in writing before I forgot about it. Here's another pic of the hound for your troubles 🙂

HiRISE and shine!

About a month ago I spotted on Twitter a link to a website, AreoBrowser. The site shows off the open digital terrain models (DTMs) generated by the Mars Reconnaissance Orbiter High Resolution Imaging Science Experiment (HiRISE), which are available to download from the University of Arizona HiRISE website. AreoBrowser looked like a really useful tool for browsing the HiRISE catalog, but wouldn't it be great to see these models in the HoloLens? After a quick conversation with the creator of AreoBrowser, Matt Brearley, I came up with a plan for how to get all 557 models downloaded, process the meshes down to a usable number of triangles, and finally convert them to a format that could be served up wirelessly to a HoloLens application. These should really be Unity AssetBundles, as these are compressed files that can be loaded quickly at runtime on the HoloLens.

A HiRISE DTM

The first step, downloading the data, should be straightforward enough. Only the mesh data needs to be downloaded; the texture JPEGs can stay online and be referenced from within the HoloLens app. A wget command should do the trick.

Luckily the HiRISE team are up on visualizations and they've developed an add-on for Blender that allows the .IMG DTM files to be imported. Once imported into Blender it was clear that these meshes were going to be way too big for the HoloLens's limited processing power. Not to worry, a quick decimation and a good quality texture should make things look not too bad. After a brief foray into Blender Python coding, all the .IMG files were decimated to a similar level and converted to .blend files, which can be read by Unity. I probably should have used .OBJ… but here we are.

The .blend files are fine, but they're uncompressed and take a little while to load into Unity at runtime (after the app has already started), and besides, reading them would require some third-party code. Preloading 500+ models is really not feasible (we're still talking GBs of data); the app would take forever to start. Bad idea. Unfortunately, bulk file conversion and compression isn't really what Unity is made for, and I wasn't ready to go through each model individually, add a unique filename, and compress each file into a Unity AssetBundle by hand. Thankfully, there are enough programmable steps that can be hacked together to import, save and compress the models, which makes things more manageable.
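For the record, here's a minimal sketch of the kind of editor script I mean. It assumes the imported models all sit in a hypothetical Assets/HiRISE folder; the menu name and output folder are made up, but the AssetDatabase and BuildPipeline calls are standard Unity editor API:

using System.IO;
using UnityEditor;
using UnityEngine;

public class BuildHiRISEBundles
{
    //Hypothetical layout: one decimated model per file under Assets/HiRISE
    [MenuItem("Tools/Build HiRISE AssetBundles")]
    static void BuildAll()
    {
        string outputPath = "AssetBundles";
        Directory.CreateDirectory(outputPath);

        //Tag every model asset with an AssetBundle name based on its file name
        foreach (string guid in AssetDatabase.FindAssets("t:Model", new[] { "Assets/HiRISE" }))
        {
            string assetPath = AssetDatabase.GUIDToAssetPath(guid);
            AssetImporter.GetAtPath(assetPath).assetBundleName =
                Path.GetFileNameWithoutExtension(assetPath).ToLower();
        }

        //Build compressed bundles for the HoloLens (UWP) target
        BuildPipeline.BuildAssetBundles(outputPath, BuildAssetBundleOptions.None, BuildTarget.WSAPlayer);
        Debug.Log("Built bundles to " + outputPath);
    }
}

Running the menu item once tags every model with a bundle name and spits the compressed bundles out into the output folder, ready to be copied up to the server.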

Great, all the models are processed and can be uploaded to a public server (we're upgrading to Microsoft Azure storage) ready to be loaded up on the HoloLens, and in particular in our public GeoXplorer application. Up until now I've really only had a few models to serve up, and they've been fairly well spaced around the Earth. Now we're dealing with a few hundred models from a whole other planet. It may be time to rethink the user interface.

UI Change

I had thought about doing something similar to AreoBrowser and having a search field, but typing on the HoloLens is nigh-on impossible and certainly not ideal for repeat interactions. Besides, this being a spatial visualization, it would be a lot nicer to have the icons represented in spatial coordinates rather than in a list. It's a bonus that the HiRISE team also provide the latitude and longitude of all the DTMs on their website. This made it straightforward enough to add small icons at their locations on a textured sphere that represents Mars in GeoXplorer. But it doesn't make selecting an individual icon sphere very easy. What may work is selecting multiple icons within an area around the HoloLens cursor. This limits it to around half a dozen models, which can then be displayed as a list.

OK, there’s still a list, but we can maybe improve on this!
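As for placing the icons, it's just a latitude/longitude-to-sphere conversion. A rough sketch of the idea is below; the axis convention (north pole along +Y, longitude 0 along +Z) is an assumption and depends on how the Mars texture is wrapped:

using UnityEngine;

public static class MarsCoordinates
{
    //Convert latitude/longitude in degrees to a point on a sphere of the given
    //radius, centered at the origin. Assumes +Y is the north pole and longitude 0
    //lies along +Z; flip signs to match however the globe texture is mapped.
    public static Vector3 LatLonToPosition(float latitudeDeg, float longitudeDeg, float radius)
    {
        float lat = latitudeDeg * Mathf.Deg2Rad;
        float lon = longitudeDeg * Mathf.Deg2Rad;

        return new Vector3(
            radius * Mathf.Cos(lat) * Mathf.Sin(lon),
            radius * Mathf.Sin(lat),
            radius * Mathf.Cos(lat) * Mathf.Cos(lon));
    }
}

Each DTM icon is then parented to the Mars sphere and given a localPosition from this function.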

Each DTM has its own sphere icon placed at its location on Mars

The cursor gains a circle around it to select a patch of icons, now in blue

On selection of a patch, a set of clickable text appears with the title of each DTM

Selecting a DTM, the app downloads a mesh, and satellite image texture to overlay. This can be moved, scaled and rotated using two-handed manipulation

Alternatively the app can overlay a colored altimetry map of the DTM area showing the vertical scale of the terrain

A video outlining these interactions is embedded below:

There are a few issues, notably that the meshes are still a little large (triangle-count-wise) and they could be made a little smoother as well. Also, when they're imported into Unity, the origin point is not at the center of the model but at a corner. This is best for the texture but not for manipulation: it feels like the model is attached to something at the corner, and that's not very intuitive.

GeoXplorer will be updated in the coming weeks.

Gypsum crystals the size of the room!

One of the functions of the Fossett Lab is to provide support and development for the Earth Science labs at WashU. This has been somewhat on the back-burner until we knew the capabilities of the devices we had and had our own infrastructure in place to cater adequately to researchers. I think I've mentioned before that the space we inhabit (it's a windowless room that was painted black… so welcoming) used to house a CAVE. The CAVE was 3 walls and a floor, each about 3 m square, with 3D projectors; it let a few people stand around and look at 3D data, and it would often break down. Granted, our HoloLenses are pretty buggy and by no means plug-and-play, but at least we don't have to spend a vast amount on service fees! The CAVE was there to help (primarily planetary) scientists look collaboratively at DEMs (Digital Elevation Models) and other GIS data. So over the last few months I've been developing a flexible HoloLens application that will display not only DEMs in a shared environment, but any 3D data a scientist would want to look at.

The first version was pretty simple; it was basically a clone of the SharingSpawnText example included in Microsoft's HoloToolkit release. We can load a few models (I think keeping it down to 4 makes things a little more manageable), and the positions and rotations are shared between devices so the model looks the same for everyone. It ends up doing pretty much what the CAVE used to do for DEMs, except now we have way more freedom to explore. The video above gives an example where myself and 2 professors from WashU are discussing the Martian DEM in front of us.

So that's all very well. But it would be so much nicer to have any model available, not just the 3 or 4 that are preloaded with the application. So this brings in something similar to my previous post 'Sharing is Caring', where I'm loading data from a remote server and sharing it among HoloLens users. Except now I'm dealing with complex mesh surfaces, materials and textures. Thankfully Unity makes things a little easier for me with AssetBundles. These bundle all three of those components into a single compressed file that can be served up from a web server. Ok, sounds straightforward…
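At runtime, pulling one of those bundles down is only a few lines with UnityWebRequestAssetBundle. Here's a minimal sketch (the URL, the parenting and the single-asset assumption are all placeholders, and there's no caching or retry):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class BundleLoader : MonoBehaviour
{
    //Downloads a single AssetBundle from a web server and instantiates the first
    //asset inside it under the given parent. URL and asset layout are assumptions.
    public IEnumerator LoadModel(string bundleUrl, Transform parent)
    {
        using (var request = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return request.SendWebRequest();
            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("Bundle download failed: " + request.error);
                yield break;
            }

            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);
            var prefab = bundle.LoadAsset<GameObject>(bundle.GetAllAssetNames()[0]);
            Instantiate(prefab, parent, false);
            bundle.Unload(false);   //keep the instantiated copy, release the bundle itself
        }
    }
}

It gets kicked off with something like StartCoroutine(loader.LoadModel("https://<YOUR_SERVER>/bundles/model", transform)).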

I'm not going to go into the nitty-gritty with this, but suffice it to say that it's been a bit of a struggle. There are still a few bugs to smush, but I think we're pretty close to being able to serve up a model that the application can download, read on the fly, share, and manipulate. The Fossett Lab has had an undergraduate summer intern working for the last few weeks documenting some of the more precious mineral samples in our collection, even stuff that we don't have on display. She's done this using photogrammetry (pretty much like my post 'Behind the Scenes: HoloLens Photogrammetry'), which has added a few complications with some of the shiny/translucent samples, but we now have about 30 good-looking models. As there are so many, we really can't preload them into the app, and remembering 30 distinct voice commands seems a little unreasonable, so I coded up a menu of buttons that automatically gets filled at startup. Here's me and Anna testing things out:

One of the initial two samples (the pink one) is gypsum in its ‘desert rose’ morphology which, as you can imagine, is pretty fragile and you don’t really want to be handling it too much. But now in AR we can blow up crystals to the size of the room, or even throw them across the room if we want!

Now I just need to make the interface a little more usable, and then allow access to all those outcrop models I’ve made, oh and all the DEMs, and atomic structures, and protein molecules… and there’s probably so many other things as well!

 

Sharing is caring

This has been a busy couple of months, a lot of time in February was spent on a plane and as such it felt like I was getting behind on a bunch of things. It really took a couple of weeks to get back into the swing of things and pushing forward again.

The Fossett Lab at WashU is making bigger strides now. I'm into my 2nd year there and it feels like we can start reliably running classes for small groups. We've had a couple of mineralogy labs in, and the undergrads that went to the Azores did some testing of how well our outcrop models may help (or not!). We have a CS undergrad, kindly working for us off of a fellowship, who is making great strides at getting our web platform up and running and integrated with our HoloLens apps. Microsoft have also been involved, helpfully sending 2 techs to see us and discuss what they can do for us (probably quite a lot given their resources). It has also been a time to reflect on the work I've done in building the applications. They were really a first attempt, layered over time with gradually better-written code and graphics. Best to break things down and rebuild.

Learning mineral structures from a different point of view

Prepping for a trip to the Azores

Well mostly; there are some good things to keep. But there is one glaring omission: sharing holographic experiences. It's a part of the AR experience that is key for any instructor wanting to guide students through a set of examples or tasks. So far we had been able to make sure that the holograms appeared in the same place for everyone with a HoloLens on, but there was no communication between the devices. If one person clicked on a button, the rest would have to do the same. Wouldn't it be better if the instructor could click a button and it automatically clicked for everyone? Or even rotating and moving holograms – this has been elusive.

Up till now….

The way we set up our applications is that a lot of the three-dimensional models that appear as holograms are stored not on each HoloLens, but on a remote server that the HoloLens can reach with a simple URL (just like typing an address into a web browser). My thinking had always been that I could download the model onto one HoloLens and then broadcast the model to the rest of the connected headsets. This is a really inefficient way of doing what I wanted, and besides, without disabling firewalls and other security protections it would be impossible. Materials and textures would have to be copied across and synced up, along with the positions of the many parts of what can be fairly complex holograms.

Things were compounded by documentation and help for sharing holograms being pretty thin on the ground. There are some examples on Microsoft's web pages, but they only really cover a pretty straightforward case, and it isn't clear how to adapt the code for my own use. As such I'd put off, and put off… and put off really tackling this problem.

I finally figured that instead of doing what I had planned, I could just send a signal (a simple 'yes, go download this' or 'go download that') to all the connected HoloLenses to download the model separately, and put that model within another object whose position and (if needed) rotation I could share. When I say 'within another object', that object can be completely transparent, blank, nothing, nada but a singular point that has a position, rotation and size in three-dimensional space. I feel I may have lost some people! But suffice it to say that I think this should solve all my problems. I've worked out how to share simple booleans (either true or false), which is great for signalling whether or not a button has been pressed, and from that I can signal that a model should be downloaded. This may seem like baby steps for coding experts, but it seems like I've learned a lot in the last couple of months! Maybe I'll write some better documentation…
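In code the idea really is small. The sketch below leaves the sharing layer out entirely: downloadRequested stands in for the boolean that the sharing service flips on every connected device, anchorRoot is the invisible object whose transform is synced, and BundleLoader is the hypothetical downloader from the AssetBundle sketch earlier on this page:

using UnityEngine;

public class SharedModelLoader : MonoBehaviour
{
    public Transform anchorRoot;    //invisible object whose transform is synced between headsets
    public BundleLoader loader;     //hypothetical downloader (see the AssetBundle sketch above)
    public string modelUrl;         //address of the model on the remote server (placeholder)

    bool downloadRequested;         //flipped to true by the sharing code (not shown)
    bool downloaded;

    void Update()
    {
        //When the shared flag flips, every HoloLens downloads its own copy of the model
        //and parents it under the shared anchor, so only one transform ever needs syncing
        if (downloadRequested && !downloaded)
        {
            downloaded = true;
            StartCoroutine(loader.LoadModel(modelUrl, anchorRoot));
        }
    }
}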

Bringing that into the applications that are furthest along: our mineral structure app is now mostly rebuilt, and a convenient 'Sync' button has been added to make it easy for instructors (see the above video). All the models are now on the remote server (which means a lot less lagginess at startup), and on that front, uploading new mineral structures is a doddle and requires no extra faffing around in Unity on my part. The outcrop application is likely going to be a little harder to apply what I've learned to. However, that one is in the process of being stripped down and built back up again better and more flexible than ever! I've also incorporated a couple of online repositories of imagery (NASA's Global Imagery Browse Services) and data (USGS earthquake locations), so this should provide a much broader range of geologic and planetary information. By the time I'm done with them, they'll feel much different and much more inclusive.
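For the USGS side, the earthquake locations come from their public GeoJSON summary feeds, which are just an HTTP GET away. A rough sketch is below (the feed URL is the public daily all-events summary; check the USGS feed documentation for the exact fields before relying on it):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class EarthquakeFeed : MonoBehaviour
{
    //Public USGS GeoJSON summary feed (all events from the past day)
    const string FeedUrl = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson";

    IEnumerator Start()
    {
        using (var request = UnityWebRequest.Get(FeedUrl))
        {
            yield return request.SendWebRequest();
            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("Feed download failed: " + request.error);
                yield break;
            }

            //Each GeoJSON feature carries a magnitude and a [longitude, latitude, depth]
            //coordinate that can be dropped onto the virtual globe, much like the
            //HiRISE icons on the Mars sphere
            Debug.Log("Downloaded " + request.downloadHandler.text.Length + " characters of GeoJSON");
        }
    }
}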

I think I’m probably going to have just gotten good at all this and then they’ll change everything…

Getting adventurous…

Up until now I've been trying to just get content into the HoloLens environment: producing 3D models in various guises and importing them into Unity, or throwing them up onto a server for the HoloLens to fetch.

The next stage is telling the world (think big?!) what we're doing, and that means trying to show what is essentially a one-person display to a whole room of people. Conveniently, the HoloLens allows you to live-stream what someone is seeing through the device over a WiFi connection. The tiny webcam-like camera sits on the HoloLens and combines its image with the holograms on display. It does a nice enough job for a quick demo, but it's pretty low-resolution and grainy, so we should think about making something a little more polished.

About a year or so ago, a group of HoloLens developers came up with a way of solving this issue by upgrading the live-stream optics, by way of jury-rigging a HoloLens on top of a DSLR camera. I had a look through the tutorials back then and chalked it up as something fun to do, but well above my competence level at the time. However, these last few months I've been chipping away at getting things set up and today… success!

Firstly I took stock of what we had and where the major expenditures were going to be. Thankfully the lab had bought a nice new DSLR (Nikon 5300) that we use for field work, so no spending needed there. The lab desktop was lacking an HDMI-in, so we had to splash out on a capture card; we used the one listed on the GitHub site, which had somewhat mixed reviews, but in hindsight I can say I've had no issues getting it up and running. The software SDKs needed were free downloads (woo!), so the final conundrum was how to attach the HoloLens to the DSLR. The tutorial mentions an elaborate setup involving both 3D-printed and machined parts, which I'm sure makes for a very secure housing for the expensive HoloLens. Seemed a little over-elaborate for me… to the interwebs! I stumbled across this 3D-printable model that I ended up sending to the library at Columbia:

Turns out most universities have a 3D printer lying around and they’ll print things like this for free!

This did the job nicely.

So, all the hardware is sorted, the capture card works well and live-streams through the HDMI-out from the camera to the desktop. Now we need to calibrate things as the software needs to composite images from 2 cameras (the DSLR and the HoloLens) that are in different locations. Calibration time!

After taking about 50 photos like this, you’d look bored too…

This process spits out a text file with the transform information needed for the images to be combined and overlain so that the holograms appear in the right positions.

The final step is incorporating this within an application. I managed to work out how to share holograms a while back, which is a requirement for this approach (although there are now multiple ways to share holographic environments between HoloLenses). I'm going to use my trusty Virtual Earth application, which has a hi-res image of the Earth overlain on a sphere and can be shared among a group for class use. Miraculously, on firing everything up for the first time (well, not quite the first time, as I didn't read the instructions properly and booted things up in the wrong order…!) it worked! So many more pixels on my holographic image captures:

Before just using the HoloLens’s ‘webcam’

After using the DSLR composite from exactly the same position. The hologram is way more solid-looking and much higher resolution – plus the background is much clearer.

The outcrops look pretty good too: here’s one created from drone imagery

And it even works for movies too:

1080p’s of relief. Only took a few months of getting all these things put together and having the confidence to try it. Next job will be getting holograms to sync up so that the holograms will be much more dynamic on these videos!

~M

1 year in…

The last couple of months have been a little busy. From getting back to some research on the Whillans Ice Stream in Antarctica, to continuing development of the Fossett Lab's applications, it's been a bit hectic. Throw in a SeismoDome planetarium show (which was a hit, and we may have some shows outside of NYC) and there hasn't been all that much time to write any blog posts!

SeismoDome at the Hayden Planetarium

The annual AGU meeting in New Orleans is upon us and preparation is well underway for presenting a demonstration of the Fossett Lab's work to the scientific community. Given the expected unreliability of the WiFi at AGU, I've been putting together a standalone demo app that should show off a variety of different topics we've been working on: outcrops, atomic structures and earthquake catalogs. We're hoping that the community will be generous enough to help us grow the content, as the HoloLens-side application is pretty much ready for it.

Also in prep for AGU, my GitHub site has gotten some much-needed attention and I’m starting to upload a few of the applications. It’s still pretty rough but the essentials will be up there. I’ve also bought business cards for the first time ever…

This last year has been a lot of learning for me. Designing user experiences (UX) was not really part of my Ph.D., but it has been a lot of fun. It's also been mostly solo work and a lot of self-learning, so it's a relief to be introducing a couple of undergraduates into the lab in the new year: one working on importing the Rudolph rock collection into a holographic environment, and one working more closely with me on the design of Earth Science applications (and hopefully fixing a lot of my poorly-written code!). Bring on 2018!