Google I/O 2014 – A 3D tablet, an OSCAR, and a little cash. Tango, Spotlight, Ara. ATAP.

Ladies and gentlemen, please welcome Regina Dugan. REGINA DUGAN: For
the next 45 minutes, you’re going to get a glimpse
of a small band of pirates trying to do epic shit. A small band of pirates
in a very fast boat. Welcome to ATAP. Here we don’t tinker. We build new things. Sometimes seemingly
impossible things. We are optimized for speed. It is an essential
characteristic of our work. Our small internal
team is connected to hundreds of external teams. It’s how we tackle hard problems
at the intersection of high end push to the edge
hardware and software, at the intersection of tech and
art, science and application, problems you can’t
solve by yourself. And it’s how we do so
without compromising the tech or the beauty
or a sense of soul. We have protectors of all
of these elements with us. There are 11 projects
ongoing at present in ATAP and partners all around them. This is the global
web of our partners. We believe this yields
better solutions, faster. In the last two years we have
worked with 305 partners, in 22 countries, on
three continents. Universities, start-ups,
large system integrators, governments, and nonprofits. The answer is out
there somewhere if we are just humble
enough to find it. This approach also lets
us do large scale research and ship both. We dare to dream and do. ATAP is full of doer
dreamers like many of you. And our goal is to close the
gap between what if and what is. We’re going to talk a
lot of tech this morning. We’ll talk about 3
out of the 11 projects currently ongoing in ATAP. Three projects in 45 minutes. They are proof points. Johnny Lee, Paul Eremenko,
and Rachid El Guerrab lead these projects. They are ATAP technical
project leads. They come for two years. No one comes to build
a career in ATAP. You come to build something,
to do your best work. We’re going to start off with
Johnny, interface technology expert, core
contributor to Kinect, a top-rated TED talk using Wiimotes. Now Project Tango lead, trying
to make the future awesome, and a tablet that sees in 3D. Paul will be up next. Evangelist for the power
of open, complexity geek, drone designer, rocket scientist, and occasional pilot. MIT, Caltech, Georgetown,
currently head of Project Ara. And a guest appearance
from Glen Keane. Animator, storyteller,
and champion of the hand drawn line. Disney legend. Creator of beloved characters
from Ariel to Aladdin. And now, ATAP’s upcoming Spotlight Story, “Duet.” Johnny, the floor is yours. JOHNNY LEE: Well, thanks. Thanks for that warm welcome. It’s a pleasure to be
here, and I’m really excited to be able to talk to you guys about Project Tango. So Project Tango is an effort
that we’ve been pushing on to try to give devices a human
scale understanding of space and motion. Each of us does something remarkable every day. You sitting in your
seat roughly understand the size of this room, as well
as the position and orientation of the person sitting next
to you, as well as yourself. And this spatial awareness is remarkable, yet we take it for granted every day. Because it’s a human
perception system that does this just for free. But our phones, our tablets, and our laptops have no understanding of these spatial relationships. Yet, it’s so
fundamental to the way that we interact with each
other as well as the way we interact with things. So, what if you had
this in a device? What could you do? Well, imagine if the directions to your destination didn’t stop just
at the front door, but could actually take
you to the exact room that you want to get to. Or it could allow the visually impaired to navigate spaces that they’ve never
been in before. You could play
games in your house where you use the
furniture as castles or you play hide and seek with
game characters who actually know where your closet
is and can go hide there. You could also enable emerging
applications such as robotics, such as allowing free
flying robots to navigate through the space station. In fact, we have one of our Project Tango devices going up to the ISS in August. Now the reason we
think we can do this now is because, if you look
at the amount of computing power available on
mobile processors, it has grown exponentially, as with everything else. This is the familiar Moore’s law chart, with mobile processors as the example. And today we have processors
like the Tegra K1, with a tremendous amount
of computing power. But what’s really
interesting is, if you plot another device
on this chart, which is the vehicle that won the
2005 DARPA Grand Challenge. So the modern processors we have, that we can buy today, actually exceed the amount of compute necessary to drive
132 miles autonomously through the Mojave Desert. So the compute is here. The compute is genuinely
here to do amazing things with our devices. What’s missing is the
hardware and software. So Project Tango
is a focused effort to work with the hardware
and software ecosystem to advance the state of 3D
sensing on mobile hardware. As Regina mentioned, one of the ways ATAP operates is that we work with a very large network of partners. We work with device manufacturers, engineering support, processor vendors, IMU, gyro, and accelerometer vendors, lenses, camera sensors,
depth sensors, optics, partners within Google,
computer vision companies, and universities all
spanning nine countries around the world. So let me give you a quick tour of our hardware journey up to today. We’ve actually built four platforms over the past 18 months. Each of these was
built to answer a very specific question. And I’ll just walk
through them quickly. First we built a USB peripheral
with commodities parts. Commodity cameras,
commodity sensors. And this was to ask: can we actually run these decades of robotics and computer vision algorithms on consumer-grade hardware? And the answer was, yes, we can. The second prototype
was a tablet we built in three months. And the question
was to ask, can we actually run all
these algorithms on a mobile processor? And the answer was also yes. The phone prototype we talked
about earlier this year was our effort to
reduce the size of the lenses and the cameras
to respect the form factor requirements to fit in a modern phone or tablet, which means things like a six-millimeter z-height for the sensors. And indeed, these
devices also did work. Now we have the culmination
of the work of everyone within our network over the past
18 months to bring us to this. This is our current prototype, our DevKit that we will be making available next year. And we built this device from the ground up to do 3D, and from the ground up for compute. It has our high performance,
4 megapixel, 2 micron camera. This is a very high speed
light sensitive sensor. We have our customized
motion tracking camera that allows the
device to understand its motion in 3D space. We’ve also worked
with hardware vendors to push improvements in the performance of these depth sensors so they can fit into a device to do 3D sensing. That gives us geometry about
the floor and the walls. Then we found a partner that was interested in building this device with us, to put in the most powerful processor that we could find and pack it with as much RAM and
storage as a laptop. So this was designed
for developers to explore 3D compute. A little peek into
the software side now. On the left side, you’ll
see the fish eye image. If you think about
human vision, we have this amazing
peripheral vision. We are able to see
far out to the sides, but we also have this area in the center where we have detail, our foveated region. And what these two cameras do is give us something
analogous to human vision. Where we have a wide fish eye
camera and a more traditional field of view camera as well. You’ll also notice
in the bottom left that there are these
little white dots. This is actually carefully timestamped gyro and accelerometer data. The motion sensors in a phone are very similar to the motion sensors you have in your inner ear. So this allows us to have
both the eyes and the motion sensing capabilities
of human perception. A little bit about
the depth sensor. At a very simplistic
level, it essentially it’s a sensor that sees
shape instead of color. On the left you’ll see a
more traditional image, taken with a camera. You can see all the color
and shading and lighting of the scene. But on the right,
you’ll actually see this is what the
depth sensor sees. It just gives us information
about the contours and the shape of all the
furniture, regardless of the color, and
to some degree, independent of the lighting conditions. When we combine all the tracking
data and the sensing data together, we end up
being able to fuse it into a single estimate
of both the device’s position and the environment.
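The fusion idea described here can be sketched in miniature. What follows is a hypothetical, heavily simplified 1-D complementary filter, not the team’s actual estimator: it integrates fast gyro rates for smoothness, and continually pulls the estimate toward a drift-free visual measurement.

```python
def fuse(gyro_rates, vision_angles, dt=0.01, alpha=0.98):
    """Tiny 1-D complementary filter, an illustrative stand-in for the
    (far more sophisticated) visual-inertial fusion described in the
    talk. Gyro integration is fast but drifts; the visual estimate is
    drift-free but noisy/slow, so we blend the two each step.
    """
    angle = vision_angles[0]
    out = []
    for rate, vision in zip(gyro_rates, vision_angles):
        predicted = angle + rate * dt                     # fast, but drifts
        angle = alpha * predicted + (1 - alpha) * vision  # pull toward vision
        out.append(angle)
    return out
```

With a biased gyro (constant spurious rate) and a steady visual reading, the fused estimate stays bounded instead of drifting without limit, which is the property that matters for long walks like the one in the video.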
This is a video of Joel Hesch, who’s
engineers on the team. You’ll see on the
left side, this is the raw data coming into the
system, the camera, the motion sensors on the bottom. And what we compute is
what’s on the right, which is the trajectory of
the device in real time. So what he’s doing
is he’s walking around the first floor of this
40,000 square foot building. And you can actually
see in real time it’s estimating his position
throughout that space. Now because we just use the
cameras and the motion sensors, this is a full 3D
directory, it is not restricted to a single plane. [APPLAUSE] Thanks. And you can actually
see the sort of coil as he goes up the stairwell. Now remember, there’s no
GPS, there’s no Wi-Fi, there’s no Bluetooth. This is just using the cameras
and the motion sensors. The only requirement that
we have in the environment is we have light. Which is similar to the
[INAUDIBLE] of requirement that you have to walk
through the space. What he’s doing here is he’s now
walked across, up five flights of stairs, across this entire
building, down five flights of stairs, back to
his original location. And this is a very
simple test for us to understand how
well we’re doing. And it turns out that we
have about 1% of drift over the path length traveled. [APPLAUSE] Thanks.
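The closed-loop test he describes reduces to a simple metric: the gap between the first and last pose estimate, divided by the total distance walked. A small sketch (the trajectory below is made up for illustration):

```python
import math

def closed_loop_drift(trajectory):
    """Estimate tracking drift from a closed-loop walk.

    trajectory: list of (x, y, z) pose estimates. The walker physically
    returns to the start, so any offset between the first and last
    estimate is accumulated error. Drift is reported as a percentage
    of total path length, the figure quoted in the talk (~1%).
    """
    path_length = 0.0
    for a, b in zip(trajectory, trajectory[1:]):
        path_length += math.dist(a, b)
    end_error = math.dist(trajectory[0], trajectory[-1])
    return 100.0 * end_error / path_length

# A toy square loop, roughly 40 m around, ending 0.4 m from the start:
loop = [(0, 0, 0), (10, 0, 0), (10, 10, 0), (0, 10, 0), (0.4, 0, 0)]
print(round(closed_loop_drift(loop), 2))  # → 1.0
```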
Now, when we combine the tracking information with the depth
sensor, we actually are now able to capture
geometry of the environment. This is Yvonne, one of the
interns on the project. This is a false color image
where red is low and blue is high.
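A false-color rendering like this can be produced with a simple linear ramp from red (low) to blue (high); the exact palette used in the demo video is an assumption here.

```python
def false_color(height, lo, hi):
    """Map a height value to an RGB triple: red at the low end,
    blue at the high end, a simple linear ramp between them.
    """
    t = (height - lo) / (hi - lo)   # 0.0 at the floor, 1.0 at the ceiling
    t = min(max(t, 0.0), 1.0)       # clamp out-of-range points
    return (int(255 * (1 - t)), 0, int(255 * t))

print(false_color(0.0, 0.0, 3.0))  # floor → (255, 0, 0), pure red
print(false_color(3.0, 0.0, 3.0))  # ceiling → (0, 0, 255), pure blue
```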
But you can see that it’s capturing the floor, the walls, and the stairs as we walk up. In just a second it’s going to
show you this top-down view, in which you can see that, even after five flights of travel, we can still see down the
middle of the stairwell. Again, the accuracy in
alignment of the data is on the order of 1%. So you’ve probably seen scans like this, with $10,000 or $1,000 laser scanners and industrial scanning. But what’s new is
the push to make this happen on the
consumer scale device. Now, scanning stairwells
isn’t something most people need to do. It’s actually a
nice test structure because this has x, y,
and the z variation so we can see our accuracy
along every dimension. But you can imagine,
once we get this into the hands of
consumers, they can do things like capture
the geometry of their house. So this is me walking
around with one the prototypes, walking
around my house. Again, red is the floor and blue is the walls. But this is me walking around
my living room, a laundry room, guest bathroom,
and another bedroom. I’m basically walking
around my house as quickly as I
would naturally walk. I don’t have to move
particularly slowly, I just sort of point it as
though I’m giving someone else a tour of my house. In a second it’s
going to show you a zoom in of one of the rooms. You can see that the real time
structure is relatively coarse, but this is already enough
geometry for game developers, or someone who wanted to make a game where soldiers attack your bathtub, if you wanted to. But if you actually capture
the data and store it, you can do much better. So this is a partner
called Matterport where we get one of our devices. And if you store the data,
and do offline processing, the quality that you can
produce from these devices is much higher. This is cool stuff, isn’t it? So we’re going to
switch to the tablet to show you some real demos,
some real life real time demos. Do we have a tablet? Hello? All right, great. So we have been having a
little bit of HDMI trouble, so there will be
some black screens, but we’re supposed to
recover from those. OK, so let me first
show you the tracking. So this is our 7 inch prototype
tablet development kit. And you can see, what’s on
the left side is basically the image from the camera. There you go, stabilize please. So we can see the fish eye lens. All right. You can see the fish
eye lens on the left and also the hardware
accelerated tracking. Come on, guys. OK, I’ll hold the cable. Come on. All right, maybe if
I do it this way. All right, so if you see
the camera lens on the left. Wow, you can do it. All right, let’s switch
to the other device. Apologies. AV issues. So we have the camera
on the left, all right. We are going to
switch to that TV. That is our third backup. Well, I don’t want to
risk it jumping out, so I apologize to you guys. Can we switch to that TV? And can we get the camera
on the TV up to this screen? Otherwise we’ll just,
don’t want this to be, this is still this tablet. All right. So on the left side, you’ll see
the camera that actually has– [CHEERING] –yes, the [INAUDIBLE]. On the left side, you
see the fish eye lens, and you actually see these
hardware-accelerated feature tracks. And this basically gives us the motion of the device. The gyro and accelerometer are these little waveforms underneath. Now, if I turn the
camera left and right, you see the cone
sweeping back and forth. But what’s different
is that we’re actually able to track the motion. So if I move left and right it’s
actually tracking my position. And if I make a big
circle, it actually is tracking me in real time. So if I wasn’t
tethered to this cable, I could actually just
unplug, and actually walk through the
entire Moscone Center and it would be tracking my
position in full six degrees of freedom, continuously.
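The way a tracked 6DOF pose drives a virtual camera can be sketched with a homogeneous transform. This is a minimal illustration (yaw-only rotation; a real Tango pose carries a full 3-axis orientation, typically a quaternion), not the actual Unity integration:

```python
import numpy as np

def camera_from_pose(position, yaw_deg):
    """Build a 4x4 world-from-camera transform from a tracked pose.

    A sketch of what a Tango-style pose gives a game engine: the
    device's position and orientation drive the virtual camera
    directly, so physically walking moves the in-game viewpoint.
    """
    t = np.radians(yaw_deg)
    m = np.eye(4)
    m[:3, :3] = [[np.cos(t), 0, np.sin(t)],
                 [0,         1, 0        ],
                 [-np.sin(t), 0, np.cos(t)]]
    m[:3, 3] = position
    return m

def world_to_camera(pose, point):
    """Express a world-space point in the camera's frame."""
    p = np.linalg.inv(pose) @ np.append(point, 1.0)
    return p[:3]

# Device 2 m along x, no rotation: a point at x=5 appears 3 m ahead.
pose = camera_from_pose([2.0, 0.0, 0.0], 0.0)
print(world_to_camera(pose, [5.0, 0.0, 0.0]))  # → [3. 0. 0.]
```

In an engine like Unity, the equivalent step is simply assigning the tracked position and rotation to the camera’s transform each frame.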
Now let me give you a quick example of some demos that we’ve built using this. These are all built inside
of the Unity game engine. And this is an
extremely simple puzzle game, where if I move the
yellow cube and put it on the yellow switch, it
makes more blocks appear. But you can see the blue cube and the blue switch are far separated. So I actually have to move
forward to hit this switch. And because I can’t actually
reach this green switch, I’m actually going
to have to throw it. Ah, I threw it too far. Here I go. Yes! This is another tech
demo that we built inside of Unity game engine. And if you imagine once you
have the geometry of your house, and you want to create
sort of fantasy lands in different rooms, you
can use the device just to control the camera
as you look around. But you can see there’s
this wizard on the ground, but he’s only about
six inches tall. So if I want to get down
to his level in the world, all I have to do is squat down. As if he’s right in front of me. So I can look at the
trees and the stones and then sort of interact
with him directly. But if I want to interact
with the main map, I just sort of stand back up,
and say, hey go over there. And he’ll sort of walk
over in that direction. The other demo I
want to show you is something that one
of our university partners just got working very recently. And this combines both the depth
sensing data and the tracking data together. So what’s going on
here is I’m actually building a 3-D map of
the stage in real time. [GROANING] Here we go. Come on. We learned a lot of new
things on this project, so. So I can sort of
map up this wall and it’ll start to
texture it and capture it as I walk around. So as the hardware and
software both become better, this type of technology will
become part of the tools that we want to provide,
but it’s not there today. We’re currently working actively with both companies and universities to improve the software [INAUDIBLE]. So can we go back to the slides? So, as I mentioned, we want
to do this in collaboration with both the hardware and
software entities out there. We’re excited to announce that
we’ve started early engagement with LG to make a consumer
scale device next year. We have early integrations
with both Unity and the Unreal engine
and Qualcomm Vuforia, so if you already know how
to work with these tools, you can build a Project Tango-enabled app. I encourage you guys to go
out to the sandbox area, try some of the demos. These are partners that have
gotten early development units and started doing demos. And there’s a lot
more work to do. And if you want to sign up
for DevKit, go to the website or go to the sandbox. There’s a tremendous
amount of new work to do when we start thinking
about what happens when our devices have this sense of awareness. And I want to work
with each of you because I genuinely think
the future is awesome. Thank you. REGINA DUGAN: Thanks, Johnny. Thanks, Johnny. Normally when demos
fail for Johnny, he does a little jig on stage. So we miss that part of it. Project Tango is one of
ATAP’s most mature projects. Project Ara is in an earlier stage. Both capitalize on advances
in mobile computing, miniaturization,
optimization of electronics, and the opportunities that
result at that intersection. They are both
challenging what we believe to be possible
in a mobile platform. Tango and Ara have
accomplished in months what would normally take years. That’s not an odd coincidence. It’s the result of
a core belief of ours. Namely, that open
wins over closed and that speed is
essential to innovation. To give you a sense
of ATAP’s speed, let’s take a look at
the last two years. Now, ATAP was born
on May 22, 2012. We are two years, one month,
and four days old today. And in that time, 11
projects have been born. From acoustics to
wearables, we’ve shipped multiple
products to scale. “Windy Day” and “Buggy Night,”
our first two Spotlight Stories, and Skip, an NFC authentication token, among them. Soon, you’ll be able to
authenticate your Moto X with the next generation of
NFC Auth, a digital tattoo that lasts for five days. We built an interdisciplinary
team of 114, from statistical ethnographers
to Oscar winning directors. Our Skunkworks shop can build
almost anything and fast. We can build, cut, bend
and take things apart. On June 21 last year, we signed
a multi-university research agreement with 8 of the
country’s top universities. From Caltech to MIT,
Texas to Illinois. And then 8 turned into 16. It doesn’t take
us 9 to 12 months to contract with
researchers anymore. It takes us less than 30 days. We’ve had two parent companies,
one lock picking class, several engagements,
two weddings, and 6 baby pirates born. That’s a fast boat. Paul is the technical
lead for Project Ara. And if you want to
see fast, watch Paul. Paul. PAUL EREMENKO:
Thank you, Regina. As Regina mentioned, I am
the technical project lead for Project Ara. What if we asked better
questions of our phones? Like what if a phone
could see in the dark? Or what if a phone could
test if the water’s clean? What if I could share the
best parts of my phone? What if a phone could? We think that a phone should
and a modular phone platform might just make all of
these things possible. There are lots of challenges. True. So much so that many have said
it couldn’t be done at all. But we decided to
give it a shot. And in ATAP fashion we
started by turning statements like it’s impossible
into numbers. What exactly does
impossible mean? Now the principal challenge
to modularity is overhead. What we found is that Moore’s
law, the miniaturization of electromechanical components,
and a modern data protocol could get the modularity penalty at the system level down to about 25% across the board: in PCB area, in device weight, and in
overall power consumption. In exchange users would
have the flexibility to turn the phone into a
solution to an old problem or to turn their phone into
a new possibility altogether. To turn their phone into
a means of choosing. Why choose a phone
for its camera when instead you could choose
a camera for your phone? Why can’t I slide in a
module that’s my key fob, then take it out,
give it to a valet? Why not share the most
expensive sensor or component among my friends, my family,
or perhaps across a village? Think of Ara as a versatile
computing platform. One where development
of each element is paced by the limits of
our collective imagination and the capabilities to
build new amazing things. Think of it as an analog to
the Android app ecosystem, just in hardware. So we assembled a team. A small technical
team within ATAP. 20 partners, some 150 people
across three continents. Universities, major chip
makers, industrial designers, interaction engineers,
many, many others. And the goal is to have the team
iterate, advances in one area informing another, a process
to close the design across the disciplines
and across the teams. If we did it linearly it
would take us a decade. Instead– FEMALE SPEAKER: Let’s boot it. PAUL EREMENKO: Nine
months into the project, we have our first functional
form factor prototype. There is lots more to do, but
we’re off to a good start. This is Ara being born. FEMALE SPEAKER: Oh! [CHEERING] PAUL EREMENKO: As you see,
just a few short weeks ago, the phone was tethered
to a laboratory bench. Since then we have
cut the umbilical and we’ve exercised
many of its features. So I invite you to
see for yourself. This is the Spiral
One prototype. If we can switch to
the shoulder cam. There you go. This is the spiral
one prototype. It uses FPGAs to implement a packet-switched network on the device, using the industry standard MIPI UniPro protocol. It also has a flexible power
bus that allows any module to be a power source, a power sink, or a power store. And it supports, in
batteries and other modules. So shall we see if we
can get it to boot today? This is Seth Newburg, he’s
out chief electrical engineer, and he’s going to
be power it on. Here’s what you should expect. About 10 seconds into
the boot sequence, the power bus will initialize. The LED– there’s an LED
module on the back, which Seth will show you in
a bit– it’ll come on. About 30 seconds into the boot
sequence, if all goes well, the display module
should initialize, and the screen will
do a quick flash. But 35 seconds into the boot
sequence, the Linux kernel will boot, and the Android
logo will appear on the screen. And, hopefully, fingers crossed,
at 60 seconds the Android home screen will appear
on the device. So, Seth, go ahead. There’s the LED. Flash, this is promising. All right. Just a little bit further,
So, Seth, go ahead. There’s the LED. Flash, this is promising. All right. Just a little bit further, just a little bit further. Ooh. Well, we’re most
of the way there. OK. So, maybe we’ll let Seth power cycle it and try again. I assume we can’t recover
from that particular screen without a power reboot. In the meantime, let’s
go back to slides. Patrick, you can
relax for a second. And we’ll call you back if we
reestablish the home screen there. So, let me talk about
what’s difficult about this, other than actually
getting the phone to boot. There are many
technical challenges that must be overcome and I’m
going to talk about just a few of them. I’d like to talk about antenna
design, about the interconnects that go into making the
device work, about a software architecture that supports modularity at the device level, and making it beautiful,
the industrial design and the aesthetics
of the device. Which may actually be one of
the hardest challenges of all. So, cellular and Wi-Fi
antennas in a user configurable modular device
pose a particularly unique challenge. Our approach has
been to use computer-optimized, conductive-grade antennas, developed by one of our partners, X5 Systems, and to leverage the endoskeleton frame’s metallic structure as part of the antenna system. We’re also experimenting with
3D printing the antenna using conductive inks as part
of the module shells. Now, to reduce the
modularity overhead, we decided on a
contact less approach to the data interconnections
between the modules and the endoskeleton. This allows us to save
precious volume in PCB area, but capacitive or inductive
data transmission mechanisms are lossy. And minimizing insertion loss,
across a range of frequencies, is hard. Interestingly, the
challenge is actually not at the high frequencies,
but rather efficiently supporting the transmission
of low bit rate data, at the low power gears
of the data protocol. How are we doing there Seth? No luck yet. OK. The elector permanent
magnets alleviate the need for
mechanical connectors or latches in attaching the
module to the endoskeleton frame. EPMs are magnets that
are passive in both the off state and the on state
and take a short current pulse in order to switch
between those two states. EPMs are a proven
technology that’s been used in industrial
lift and crane applications and they’ve been
around for decades.
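The latching behavior just described (stable with no power in either state, toggled only by a brief current pulse) can be modeled as a toy state machine. This is purely illustrative; real EPM drive electronics are analog, not a Python class.

```python
class ElectropermanentMagnet:
    """Toy model of the EPM behavior described in the talk: the magnet
    holds whichever state it is in (on or off) with zero holding power,
    and a short current pulse flips it between the two states.
    """
    def __init__(self):
        self.on = False          # passive off state, no power needed

    def pulse(self):
        self.on = not self.on    # a brief current pulse toggles the state
        return self.on

epm = ElectropermanentMagnet()
epm.pulse()
print(epm.on)  # → True (module latched, still drawing no power)
```

The zero-holding-power property is exactly why EPMs suit a modular phone: battery is spent only at the moment a module is attached or released.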
For Project Ara, we had to miniaturize them by a factor of 1,000: from something that can lift a car to something that can hold the weight of a small kitten. The current prototype
platform relies on custom kernel
drivers for each module. This approach is neither
scalable nor secure, given an open ecosystem
of third-party developers such as we envisage. The network stack,
as I mentioned, employs the MIPI
UniPro protocol. In future spirals of
the platform, slated for later this year and
early next calendar year, the Android kernel will
utilize generic class drivers for UniPro, with
user space components for any additional
functionality or for non-class conforming devices. Yes, this is going
to require changes to Android to make it modular
and to support hardware hot plug. In this regard, Ara
is a stress test to see what Android can do
in applications that stretch beyond the traditional
smartphone. Now, let’s talk
power for a second. Battery technology has been
advancing rapidly, just not so much in smart phones. Today the tech is here to make
a battery with triple the energy density of a conventional
cellphone battery. An example is the silicon
lithium ion layer technology. But the battery will
have reduced cycle life. Modularity opens up new
opportunities for innovation and getting it to
market quickly. And the user can
choose the technology based on their specific
need or use case. So, such a battery would more than make up for the increased power consumption of a modular architecture. But if you don’t want a new battery technology, you can hot-swap
a regular battery module to essentially get any
battery life that you want.
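The battery claim can be checked with back-of-the-envelope arithmetic using the talk’s approximate figures (about 3x energy density, against the roughly 25% modularity power overhead quoted earlier):

```python
def effective_battery_gain(density_multiplier=3.0, power_overhead=0.25):
    """Net runtime multiplier: higher-density battery vs. the extra
    power draw of a modular architecture. Both defaults are the
    approximate figures quoted in the talk, not measured values.
    """
    return density_multiplier / (1.0 + power_overhead)

print(effective_battery_gain())  # → 2.4
```

So even after paying the modularity tax, such a battery would still net roughly 2.4x the runtime of a conventional cell, which is the sense in which it "more than makes up" for the overhead.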
Now, putting all this together, we sought an industrial
design that can be both modular and beautiful. It must overcome
the connotations of boxiness and brick-likeness that people associate with modularity. And it also has to close from an electrical and functional perspective. With our industrial design
partner, NewDealDesign, we strive for smooth,
sleek looking modules, without traditional connectors,
and a parcelling scheme that celebrates rather than conceals
the modularity of the device. As well as aesthetic
customization to give users the expressive
capability well beyond simply selecting the color
of their phone. To that end, we’re
working with our partners to develop a new
production 3D printer that operates at 50 times
the speed of existing 3D printing technology. It will yield full 600 DPI color
in hard, soft, and conductive materials. We’re after strength and surface finish comparable to that of consumer-grade plastics. Except, of course, the
color shape and texture can be entirely unique from user
to user and for module shell to module shell. OK, enough about the challenges. Let me show you what
is under the hood. What goes into an Ara module. So this is a close
up of a Wi-Fi module. It has spring pins for now,
in the spiral one prototype, in place of the contact-less
pads that I talked about. It has two
electropermanent magnets to support the insertion of
the module either in landscape or portrait orientation. There are a number
of discrete components that you see up there
for power management and for driving the
electropermanent magnets. These will be replaced with
an integrated PMIC, or Power Management Integrated Circuit. There is currently
a rather large FPGA that serves as our
UniPro network processor. It will be replaced
with a UniPro bridge ASIC in the next
couple of months. And lastly, of
course, depicted here is the Wi-Fi base band processor
and an antenna connector in the upper right hand
corner of the slide. In this first spiral, about 65 to 70% of the module is consumed by modularity overhead. Things you wouldn’t have in a regular smartphone, in other words. That leaves about 30 to 35%
of the module for developer unique functionality. By October, we expect
to have our Spiral 2 platform and prototype
built around custom ASICs for the UniPro
network processing. This will bring the usable
area for module developers to somewhere around 70 or
75% of the modular area. And while doing exact
silicon area estimates is kind of a challenge,
in the long run we expect that native
adoption of UniPro across a wide range
of peripherals will get the modularity overhead
down to approximately 10 to 15% of each individual
module’s PCB area.
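The module-area figures across the spirals can be tabulated. The midpoints used below are assumptions within the approximate ranges quoted in the talk.

```python
def usable_pct(overhead_pct):
    """Developer-usable share of a module's PCB area, given the
    modularity overhead percentage."""
    return 100.0 - overhead_pct

# Midpoints of the approximate ranges quoted in the talk:
overheads = [
    ("Spiral 1 (FPGA network processor)", 67.5),  # ~65-70% overhead
    ("Spiral 2 (UniPro bridge ASIC)",     27.5),  # ~70-75% usable
    ("Long run (native UniPro)",          12.5),  # ~10-15% overhead
]
for name, oh in overheads:
    print(f"{name}: ~{usable_pct(oh):.1f}% of PCB area usable")
```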
however, we think that there are a lot
of interesting things that can be done even on
the current Ara platform. And so today, I’m pleased
to announce the first in a series of prize challenges
for Ara module developers. We will award $100,000 to the
developer of a novel module aimed at daily use that enables
something that you cannot currently do with a smartphone. We encourage teams. And the module must
be working when it is submitted
to us for judging. The first two runners up will
get all expense paid trips to the next Ara developer
event in the fall. We’re making a set of developer
hardware available to prize challenge participants
along with the latest release of our module
developer’s kit or MDK. Guys, this will be
really hard, but we’re going to do this together. We’re making a supply
chain for EPMs. We’re developing the processes
needed for shell fabrication. The UniPro ASICs are
well on their way. Expect a prototype version of
Android with modularity feature support sometime in the fall. And the MDK is already out. Download it, check it out. So if we were to ask better
questions of our phone, maybe they would look like it just booted. No, I’m kidding. I’m kidding. If there’s anything
I’ve learned over time, it’s that stories move us. In his book entitled
“The Golden Theme,” Brian McDonald wrote
about the universal truth of storytelling. He argues that there is one
golden theme in stories. From Westerns to science
fiction, myths, to fairy tales. That truth is what any story
worth telling is getting at. And in this regard,
he says, he believes he has discovered
the single underlying truth that links all stories. We are all the same. Spotlight Stories is about
finding the age old truth of storytelling in
modern technology. On October 29th last year,
Moto X users got a little gift. A red hat that danced
across their screen. The red hat marked
an entry to a portal. A portal to an
interactive, immersive, world where a mouse named Pepe
learned that coveting a red hat is dangerous business. And users started smiling
at their phones in new ways. [MUSIC PLAYING] “Windy Day” was the first
of ATAP’s Spotlight Stories, a new storytelling format
made uniquely for mobile. It is at the intersection
of hardware, software, and content. Art and technology. Today smart phones have
graphics processing capabilities equal to game consoles. So we asked: what could we
do with all that power? This industry spends
billions every year making the tasks of our
lives more efficient. But what about the entire
emotional landscape of our lives? If you want to do something
that touches people emotionally, you go running to storytelling. Perhaps in the
advances of mobile, we might find a new creative
canvas for storytelling. So we asked the best
to help us find out. Oscar winning
director Jan Pinkava. Oscar winning producer
Karen Dufilho. The artist who animated Woody
in Toy Story, Doug Sweetland. Character Animator Mark Oftedal. Art director and Caldecott
medal winner, Jon Klassen. Animators, modelers,
and sound experts from eight different
countries descended on ATAP. They joined our technical team
and they started to build. What together they delivered
was a simple story. A narrative with a
beginning, middle, and end where your phone is
not a small screen at all, but a window
to a new world. "Windy Day" is a technical feat. It is rendered in real time
at 60 frames per second. Indeed, it is the first
ever real time rendering implementation of Pixar’s open
graphics standard, OpenSubdiv. And simultaneously, the
first ever use on mobile. It required an intimate
understanding of the graphics pipeline, from the
GPU to the OS, through scheduling, all the
way to the high level rendering engine. And a rethinking of
tessellation to fit the hardware requirements of real time. And it required
hardware engineers, those intimately familiar
with GPUs and IMUs. Because the IMU
sensor data is what told us where you were
looking in the story, so we'd know what to render. And for the phone to
feel like a window, invisible, the sensor fusion
performance had to improve. So we implemented precision
planetary landing algorithms to make the interaction fluid. We are building a
Story Development Kit that will enable new
stories to be written. One day, we hope, you’ll have
a new type of Film Festival. A Film Festival in your pocket. And our second story turned
the forest of "Windy Day" into a "Buggy Night." And we continued our
conversation with artists. We have this new format,
what would you do with it? And one of those conversations
was with Glen Keane. Glen Keane is a
legendary animator, a singularity on this planet. He was with Disney for 38 years. Glen wanted to draw it again,
but with a graphite pencil. That meant that he was to
become our rendering engine. Only, he’d render
on paper, and in 2D. Now Glen is at once an
amazing rendering engine, but he’s very high latency. He just can’t draw in
real time on the screen. Glen’s art challenged the
tech of spotlight stories in entirely new ways. And the Spotlight Story
studio became a place where animators and
engineers sat side by side. Where Glen taught us to draw
and we taught him if, then, else. Now he flipped a few
things on us too. He turned the rendering timing
problem completely upside down. In CG, the time of
the frame is chosen, and the rendering engine
creates a perfectly tuned image. But because Glen was our
engine, all animation timing had to be cued
off the image drawn. There’s no interpolation
between hand drawn images. This required better than 16
millisecond timing accuracy. Every piece of the pipeline
had to become high precision. It took three
months to implement an entirely new
timing architecture. Everything from the animation
system, to the camera had to behave in reverse order. And when it’s wrong, it’s wrong. Ghosting occurs or
characters miss their cues. We flipped a few
problems on Glen. Traditional animation is
drawn at 24 frames per second. But on a mobile
device, everything is waiting for frames
at 60 frames per second. That meant that Glen had to draw
not at 24 frames per second, but at 60 frames per second. And he had to draw in three
point perspective at scale. Glen’s story “duet” contains
10,055 original drawings. And these are just the
frames that are visible. Many more we’re not used. So we had to develop
a filing system. This is Glen’s and this is ours. And we had to
recover occasionally from file corruption. Compression became
critically important. Glen is able to create
seamless transitions, like the transformation of
characters as they grow. In fact, it’s so
seamless you don’t feel there’s anything
unnatural about it. No technique in CG will
allow that to happen. No mathematical
encoding will enable you to do such a transformation. That meant what was once a
mathematical representation of the line, now was a
graphite stardust field. And each drawing had to be
mapped exactly to the screen resolution so that it
feels as if the line is drawn right in front of you. Pixelation destroys
the life of the line. 10,055 drawings became
13.5 gigabytes of data. And we used an entire
hierarchy of compression to fit that into
150 megabytes. The score attracted
two Stradivarius violins and top
musicians who came because the visual demanded
music of equal beauty. “duet” allowed state of the
art technology, software and hardware engineers, to
breathe new life into an art form almost lost to us. The art of hand
drawn animations. Glen Kean’s art. [MUSIC PLAYING] GLEN KEANE: I see myself as an
artist, first, who animates. Fortunately for me,
everything I’ve animated is always tested me to
learn something new. And I do believe that those
feelings come out in line. In embracing this
new technology, I feel like I re-discovered
a love for animation. RACHID EL GUERRAB:
As a tech team, we had no idea how
this would be executed. None whatsoever. Going from CG, how do
we do that in a way that doesn’t distort what
Glen is drawing? In CG, if something
goes wrong, it’s pretty easy to fix it up and go. In hand drawn, there’s
no way because that has to be redrawn by Glen. To some extent he’s
our rendering engine. He’s the guy who’s actually
producing the frames, we’re just putting
them in a 3D space. Unlike traditional
film, our hardware runs at 60 frames
per second, not 24. We decided from the beginning
that what you see on screen will be Glen's
drawings, untouched. GLEN KEANE: I've spent 40 years
thinking at 24 frames a second. How in the world can
I actually animate at a whole different time rate? And I was like, whoa,
that’s a lot of work. But then I started
thinking, wait a second. This gives you 60
more possible images to describe an action with. Why wouldn’t you want that? This whole experience
has shown me that, whether you’re holding
a pencil or you're programming on a keyboard,
you are an artist. It’s going to take both
sides to really move this art form forward to
what it can become. RACHID EL GUERRAB: With
a traditional story, the director holds the camera,
so you know he’ll get it right. But in our medium we never knew. You see somebody watching
“duet” and tearing up. That’s a moment that
you don’t forget. REGINA DUGAN: Ladies and
Gentlemen, Glen Keane. GLEN KEANE: Thank you. Thank you. So, I’m an animator, which
is an actor with a pencil. So I think I better
boot up my device here. Wait for it. Oh, there it goes, OK. So like I said, I’m an
actor with a pencil. And so whether I’m animating a
mermaid or the Beast or Aladdin or Tarzan, I live in the skin
of the characters that I draw. In this case it’s a little baby
girl, which is weird, I know. But I know
what it feels like to hold a little baby
like this in your arms. Just this last week we had
our little granddaughter born. And I know that those
soft little chubby arms with their
little marshmallow hands, what that feels like. And when I draw,
I see my drawing as a sort of... it's a way
for me to connect to you. I see drawing as a
seismograph of the soul. The lines representing
how I feel. And kind of like the eyes
are the window to the soul. And this is little
Mia from “duet” with a little beauty
mark and a belly button. [APPLAUSE] Thank you. About a year ago, Regina
invited me up to ATAP and she handed me
this mobile device. And said, so what
would you do with this? And I looked at
it and said, well the screen is a lot smaller. I'm used to a big
movie screen where my animation can play up there. And then I noticed, it
wasn’t a screen at all, but it was a window into an
infinite virtual world where the viewer had the
camera in their hands and it was seamless
storytelling. It was like there were
no cuts in it at all. It was like unbroken
eye contact. A captivating conversation
between the artist and the viewer. This is wonderful. I said, so Regina, what do
you want me to do with this? And she said, I just
want you to make something beautiful
and emotional. Well, this is music to the
ears of any artist. So what’s the catch? She said, well,
there is no catch. I just want you to push
yourself creatively, that will push us
technologically. I like this, Regina. So a year later, here
we are with “duet.” And the thing I
realized as I think back on that year, working side
by side with Rachid’s team, is that whether you’re an
artist with a pencil expressing yourself creatively or you are
programming on a keyboard, that kind of artist,
we stand on one another’s shoulders to reach
higher than any one of us could do alone. Later on this year
you’ll see “duet” in all of its virtual
interactive glory. But this morning, we’re
going to present it to you in a theatrical format. And I hope you like it. Here’s “duet.” [MUSIC PLAYING] REGINA DUGAN: I
ask myself what I’d like you to remember about ATAP. We’re a small band of pirates
trying to do epic shit. We’re trying to close the gap
between what if and what is. That we are mobile
first, lean and agile, open, optimized for speed. Yes, all of those things. But what I’d like for you
to remember in the end, most of all, is that ATAP is full
of doer dreamers, like you. Who dare to believe, even
when it means we might fail. It’s terrifying and
hard because it’s authentic and human and scary
to dare and dream and do. But it’s the only thing
that really matters. It’s why I’m here. And I suspect it’s
why you’re here. And I don’t mean here
at I/O. I mean here. It’s why we’re all here. To believe and dream and do. In this respect, I’m certain. We are all the same. Thank you. [APPLAUSE]