
By
Anjney Midha
May 16, 2018

Leigh Stewart: Infrastructure at Ubiquity6

This is the first of a series of blog posts featuring some of the awesome talent we have working on the technology behind Ubiquity6.

This week, we’re excited to be featuring Leigh Stewart. Leigh is the infrastructure lead at Ubiquity6, where he’s in charge of keeping the infrastructure team moving in the right direction, guiding technology decisions, and helping the folks around him level up. Leigh is originally from Vancouver and has been a software engineer for over a decade at major tech companies like Microsoft and Twitter.

What got you interested in AR?

I’ve always been a big near-future science-fiction fan, starting with cyberpunk books like William Gibson’s Neuromancer and, later on, books like Charles Stross’s Halting State and Vernor Vinge’s Rainbow’s End. Reading these books as a kid, I was always excited about AR because it’s a major theme in so many of them, and I felt that the technologically advanced worlds they depicted might be achievable in my lifetime.

When I was thinking of leaving Twitter, I knew I wanted to work on something very different with more of a domain-specific focus. The more I looked into the AR space, the more I became convinced that we were sufficiently close to building real things and that the space would explode in the next few years.

Why did you decide to join Ubiquity6?

I decided to join Ubiquity6 because I was impressed by the team and the ambitions they had for the technology. The founders had a clear vision for what they wanted to accomplish, and I felt like our motivations were aligned: the company was actually inspired by Rainbow’s End, which I had read 10 years ago, and we all agreed it was a world we wanted to build.

Ubiquity6 was also the only company where I felt they were directly targeting some of the biggest problems in the space, such as mapping, localization, multiplayer, and persistence. While other AR companies were focused on specific applications, Ubiquity6 was building the foundational tech to power the next 10 years of consumer AR technology.

What are you working on at Ubiquity6?

Right now we’re in the process of rebuilding one of our most important backend systems, a mapping pipeline we call Atlas, to make sure it will be able to handle any scaling challenges we throw at it in the next 2–3 years. Atlas is the core of the Ubiquity stack: a highly data-intensive, real-time computer vision pipeline that powers the AR experiences you see in the app. It’s so data-intensive because, even for a single active user, the amount of parallelism we need to provide high-quality computer vision inference in real time is very high. Every image that arrives at the mapping pipeline has to be fanned out to a large number of processors in several stages, which presents a lot of interesting challenges around consistency, scalability, and reliability. The old system scaled reasonably well, but we know we’re going to need tighter real-time guarantees and more predictable behavior at higher scale. Most of our current work is focused on leaning more heavily on very scalable data stores and organizing the pipeline more cleanly into a traditional stream processing model.
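The fan-out pattern Leigh describes can be sketched in miniature. This is an illustrative toy, not Ubiquity6’s actual code: each incoming image is dispatched to several workers in parallel, and their partial results are merged before the next stage runs. All names here (`feature_worker`, `merge_stage`) are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def feature_worker(image, worker_id):
    # Stand-in for one CV processor (e.g. feature extraction on one shard).
    return f"{image}:features[{worker_id}]"

def merge_stage(partials):
    # Stand-in for the stage that reconciles per-worker results.
    return sorted(partials)

def process_image(image, n_workers=4):
    # Fan the image out to n_workers processors, then fan results back in.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(lambda i: feature_worker(image, i),
                                 range(n_workers)))
    return merge_stage(partials)

results = process_image("frame_001")
```

A production pipeline would replace the thread pool with a distributed stream processor and durable queues, but the same fan-out/fan-in shape applies.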

How has your past experience helped you with what you are tackling at Ubiquity6?

My experience building systems software at Microsoft and very high-scale distributed systems at Twitter has left me with pretty deep insight into how to build and deliver the kind of code you find at the core of the U6 stack. In addition to guiding software engineering choices I’m helping the team build out a scalable low-touch operations and monitoring system to make sure we can reliably deliver high quality software as we scale up the app. On top of that I try to invest a fair bit of time helping less experienced engineers grow in their roles via lots of discussion and mentorship, and by helping with design and implementation on complex projects.

What is your favorite part about working at Ubiquity6?

The best thing about Ubiquity6 is working with everyone here to solve hard problems and just get things done. Everything about the work is my favorite part, and I don’t just mean the technical challenges: from the technology we’re building to the interpersonal relationships, teamwork, and leadership on the team, it’s all extremely positive and exciting. As I’ve gotten to know the team better and we’ve developed more systems and rapport, I’ve been impressed with how effectively we work together. This is a very high-performing team, and even with a bunch of strong personalities, we’re always able to make good decisions and move the company forward at an incredibly fast pace.



It’s been a wild three months.

From graffiti-filled streets in Spain to underground bunkers in Sausalito, subway stations in Tokyo to industrial kitchens in Texas, the Display.land community has been blowing our minds every morning by capturing, sharing, and exploring each other’s spaces from around the world during our early access period.


To learn more about how we got here and where we’re going, keep reading. If you just want to try it out yourself, head over to the iOS or Android app stores to download and start creating!

What You Can Do Today

Capture any space with the device you already own — from as small as a courtyard to entire city blocks
Edit insanely fast — changes you make to your spaces are rendered and saved in real time.
Instantly share your spaces via web links and videos, or freely export them as 3D models
Explore the world and join a community of global explorers in 50 countries

How We Got Here

We started U6 with the mission to unlock new ways for people to create and connect in the physical spaces they care about. One early example was our PlaySFMOMA experience last year.
To create that experience, we captured SFMOMA’s physical space in 3D using a commodity smartphone, edited and authored it remotely in a web browser, and let hundreds of people browse and experience the sandbox together in real time, onsite in AR from their own devices and remotely via desktop and WebVR browsers.

With today’s release, we’re beginning to put those same tools in everybody’s hands, with the goal of building and improving our roadmap in public with our community.

Where We’re Going: Editing Reality Together

Unlocking a new digital canvas for creativity and shared experiences.
Our goal is to grow Display.land into a destination where people can create, share and explore together in new, immersive and interactive ways.

We believe the best way to achieve this is by releasing often and publicly, supporting our earliest creators, and constantly increasing access to creative tools previously available only to high-end gaming, graphics, and 3D professionals. In the coming months, you can expect to see regular updates along this path.

Display.land is for those of us who see art in reality. If this sounds like something you’re interested in working on, shoot us a note! We’re working on some of the hardest challenges in computer vision, graphics, and multiplayer networking, and we’re actively hiring.

-Anjney & Ankit
Co-founders, Ubiquity6


Creators! Did you know that you can download any of your captures as a 3D model? 

We currently support OBJ, GLTF, and PLY file formats, which make it possible to use captures from Display.land in Blender, Cinema4D, Unity, Unreal, Maya, and most other popular creative software applications.
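To give a sense of what an OBJ export contains, here is a minimal sketch that pulls vertex positions out of the ASCII `v x y z` records. Real exports also include faces, normals, and UVs, which full-featured tools such as Blender or libraries like trimesh handle; the sample data below is made up for illustration.

```python
def load_obj_vertices(lines):
    # Collect (x, y, z) tuples from Wavefront OBJ "v" records.
    vertices = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "v":          # vertex position record only
            vertices.append(tuple(float(p) for p in parts[1:4]))
    return vertices

sample = [
    "# exported mesh",
    "v 0.0 0.0 0.0",
    "v 1.0 0.0 0.0",
    "v 0.0 1.0 0.0",
    "f 1 2 3",
]
verts = load_obj_vertices(sample)
```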

To download the 3D mesh, just open one of your captures in a desktop web browser, and click the download button in the upper-right corner of the screen. You’ll need to be logged in to see it.

If you are on iOS, the easiest way to open your capture in a browser is via AirDrop. Open the Share Menu and press “More.” From there, a new menu will open, giving you the option to AirDrop yourself the link to the desired capture.

If you are on Android, we find it easiest to email yourself the link. Open the Share Menu and press “More.” From there, choose email or whatever option works best for you.

Once your 3D mesh has downloaded, the magic begins. You now have the opportunity to create phenomenal artwork using captured physical reality. Try challenging the mundane by drawing in the absurd.

Or, experiment with contrasting elements. The opportunities are endless and the boundaries are limitless. Check out what our creators have made with Display.land below.

We absolutely love seeing what our Creators make using their Display.land meshes. In fact, we have an entire Discord channel dedicated to them: https://discord.gg/b2vxQpu.

We can’t wait for you to join and to see how Display.land has inspired you. ✨