Every year at MAX, we get to look into a crystal ball to see what tomorrow's technology, especially as it relates to Adobe, may hold for us as technologists and creatives. MAX Sneaks are technology demonstrations and experiments that generally make waves beyond the walls of Adobe MAX itself. Like last year, when a form of "Photoshop for audio" called VoCo was demonstrated: feed it about 20 minutes of someone's recorded speech, and it can output entirely NEW dialogue in that voice. It ended up making interesting waves as a concept amid the surge of "fake news" that emerged in 2016.
Since Sneaks are purely tech demos that don't currently exist in any Adobe software — but may, someday, if demand and research progress align — the possible use cases haven't been widely explored yet. And that's part of why Adobe uses MAX as a platform to explore this future tech. This year, 11 Sneaks were offered up, covering the automation of infographics and colorization, a new approach to working with digital color as paint, content-aware fill for video, and much more. I got a chance to take an early look at four of them and talk to the scientists and researchers involved.
Lincoln seeks to change the way designers create charts and data visualizations, and if you've ever tried to make an infographic, you're already throwing a ticker tape parade. The origin of data visualization is Excel, of course: we've all dealt with charts, pivot tables, and various other forms of data hell. Infographics and charts are created all the time for web, print, and video, but they're generally difficult and time-consuming to make unless you want them to be boring. There's no obvious way to approach designing an advanced chart, because it requires a tricky combination of programming and graphic design. Exploratory tools and explanatory tools exist for this, with just a little bit of crossover between them and not much more.
You can make this graphic with Lincoln in like 1 minute, wtf.
To tell a bigger story with data than a list of information or simple charts allows, you need a lot of time. Lincoln removes all of that. You can bring in a spreadsheet (say, of swimmers: their gender, country, and the time each took to complete a swim), and Lincoln lets you create visualizations of that data in a tactile way, binding the visuals to the underlying spreadsheet. Between binding information to different parameters and having access to assets that can be easily dropped in and anchored, an infographic emerges in minutes. Like, literally a few minutes for a huge visualization.
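The core idea here, binding spreadsheet columns to visual properties of glyphs, can be sketched in a few lines. This is a toy illustration in Python, not Lincoln's actual mechanics; the column names, the one-slot-per-row layout, and the linear size scaling are all my own assumptions:

```python
import csv
import io

# Hypothetical swimmer data, as it might come out of a spreadsheet.
DATA = """name,country,gender,time_sec
Ledecky,USA,F,226.1
Titmus,AUS,F,223.9
Sun,CHN,M,221.4"""

def bind(rows, label_field, size_field, size_range=(10, 40)):
    """Bind data columns to glyph properties: one horizontal slot per
    row, with glyph size scaled linearly from the numeric column."""
    values = [float(r[size_field]) for r in rows]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    glyphs = []
    for i, r in enumerate(rows):
        t = (float(r[size_field]) - lo) / span
        glyphs.append({
            "label": r[label_field],
            "x": i * 50,  # horizontal slot
            "size": size_range[0] + t * (size_range[1] - size_range[0]),
        })
    return glyphs

rows = list(csv.DictReader(io.StringIO(DATA)))
glyphs = bind(rows, label_field="name", size_field="time_sec")
```

Because each glyph is derived from the data rather than drawn by hand, editing the spreadsheet regenerates the graphic — that binding is what collapses hours of manual chart work into minutes.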
And since it’s still merely an experiment, the possible uses and applications are limitless. Interactivity, animations, everything.
Say you're an artist and you've got an image in black and white you don't have time to colorize. Maybe a portrait, or even an old photo of a grandparent. In Scribbler, you hit a button and Adobe Sensei will colorize the image instantly. To make this work, a neural network was trained on faces and people, learning to identify them and colorize them correctly, including accounting for different skin tones. Users also have some ability to choose different colors and make adjustments.
It’s a really magical experience to hit a single button and see your photos come to life in a very convincing way. Textures can also be applied to the drawing and Scribbler will use those hints to colorize it as well.
Possibly even more useful on a day-to-day basis, drawings can be colorized instantly too — for sketches of cartoons or proofs of concept for clients. There are many use cases for art directors or illustrators who are creating spec work and want to move through the colorizing portion of their work very quickly, using it as a starting point. As for me, I deeply enjoy seeing old photos of people who never had color photography come to life.
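The underlying contract — grayscale in, plausible color out — can be illustrated crudely without any neural network at all. The toy below just borrows the reference color whose luminance is closest to each gray pixel; Scribbler's trained model learns a vastly richer mapping, and the reference palette here is entirely made up:

```python
# Toy example-based colorization: for each gray value, borrow the color
# from a small reference palette whose luminance is closest. A trained
# network learns a far richer mapping, but the input/output contract is
# the same: gray in, plausible color out.

REFERENCE = [  # hypothetical (r, g, b) samples from color portraits
    (230, 190, 170),  # light skin tone
    (150, 100, 80),   # darker skin tone
    (40, 60, 120),    # clothing blue
    (20, 20, 20),     # hair / deep shadow
]

def luminance(rgb):
    """Standard BT.601 luma weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def colorize_pixel(gray):
    """Pick the reference color whose luminance best matches `gray`."""
    return min(REFERENCE, key=lambda c: abs(luminance(c) - gray))

def colorize(gray_image):
    return [[colorize_pixel(px) for px in row] for row in gray_image]

result = colorize([[200, 30], [100, 110]])
```

A lookup like this produces flat, blotchy results; the point of training on thousands of faces, as Scribbler's team did, is that the network learns *which* plausible color belongs where, not just which color is bright enough.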
So, you shoot a video on vacation and you get a great shot of your friend or an awesome building, but it's got some annoying thing in it — a random person standing in the way, or a street lamp blocking the amazing architecture — you know, the worst thing that always happens. You could take each frame into Photoshop and use Content Aware Fill to remove the offending object frame by frame if you want, but the result super sucks when you play the frames back one after another.
What if Content Aware Fill could work for video? Cloak does just that, and it’s nuts. In the Sneaks demo, a polygon was drawn around the object in After Effects. The mask was rendered separately and both the mask and video were input into Cloak as separate assets.
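One ingredient that makes video different from a stack of stills is time: a pixel hidden behind the unwanted object in one frame is often visible a few frames earlier or later. The sketch below shows only that temporal-borrowing idea, in deliberately naive pure Python; Cloak's actual method is unknown to me and certainly also synthesizes content and tracks motion:

```python
# Naive temporal fill: for every masked pixel, borrow the value from the
# nearest-in-time frame where that pixel is NOT masked. This is the
# simplest possible stand-in for video inpainting, not Cloak's algorithm.

def temporal_fill(frames, masks):
    """frames: list of 2D pixel grids; masks: same shape, True = remove."""
    filled = [[row[:] for row in f] for f in frames]
    T = len(frames)
    for t in range(T):
        for y in range(len(frames[t])):
            for x in range(len(frames[t][y])):
                if not masks[t][y][x]:
                    continue
                # search outward in time for an unmasked sample
                for d in range(1, T):
                    for s in (t - d, t + d):
                        if 0 <= s < T and not masks[s][y][x]:
                            filled[t][y][x] = frames[s][y][x]
                            break
                    else:
                        continue
                    break
    return filled

# A 3-frame, 1x2-pixel clip where frame 1's left pixel is masked out.
frames = [[[1, 2]], [[9, 9]], [[1, 4]]]
masks = [[[False, False]], [[True, False]], [[False, False]]]
filled = temporal_fill(frames, masks)
```

Even this toy version hints at why per-frame Photoshop fills flicker: each frame invents its own fill independently, while borrowing across time keeps the patch consistent from frame to frame.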
The net result? The offending item is FRICKING GONE. Weirdo people? Gone. Stains on shirts? Gone. Logos, awkward straps, even people who are originally the primary focus of the shot? All gone. Erase everyone and everything from your video. The technology demo literally did just that. You are a god with Cloak.
When you're oil painting, you can grab colors and mix them and have them all sitting in front of you in different ways. But digitally, you don't have the same experience. You can't blend the colors on a color wheel, or experiment with them any more than an eyedropper allows. Research teams explored these issues with many different artists and found just how annoying this is for them.
Playful Palette brings traditional palette experiences to digital. For example, you can start with a skin tone in an oil painting and tap on it to add it to a digital palette dish. Then you can add a few other colors from your color wheel. But then? You can mix them together in different fluid ways — or even un-mix them. Once you’re happy with the mix, you can freeze it, select a color, and start painting. Swatches are created around the dish so you can always go back and grab the color again or adjust it for a new shade. And if you get to a point in your painting where you wish you had used a different color, instead of setting fire to the painting physically and starting over, you can actually adjust that color independently even though it’s been used already.
I'm a big fan of Physicspak.
Other Sneaks included Scene Stitch, which removes parts of a scene that Content Aware Fill can't properly fill and replaces them with semantically appropriate content from an image library; Physicspak, which fills a given shape with a set of filler graphics, calculating the size and placement of each element to pack the space; Sonicscape, which allows a tactile and visual approach to arranging 360 audio inside 360 video; Sidewinder, which provides depth to 360 video; Quick3D, which searches for 3D models based on crude drawings; Puppetron, which uses machine learning to apply artistic styles to photographs of faces; and Deepfill, which fills in gaps in incomplete images using synthesized image patches.
This is my first MAX, and I naively looked at the map and my schedule and said "ah, this will be easy. It's like a fraction of the size of NAB, so I won't die!"
MAX starts early, ends late, and now, 11,000 steps later, I feel like I did not emotionally prepare.
But don't let my whining fool you: it's been the bomb so far. I've got a couple of cool stories in the works, and I'll definitely have some fascinating stuff to share later today after MAX Sneaks (the session where Adobe shows off new tech in progress/shows us resistance is futile), but for now, I wanted to tell you the best things people said.
Between Adobe Spark and Dimension offering new tools I didn't even know I needed, and the focus on immersive technology and machine learning through Adobe Sensei and the new VR tools in Premiere and After Effects, a lot of the conversation was focused on how to continually remove the technical barriers between humans and machines.
"Artificial intelligence will evolve and learn to harness the entire creative community, anticipating what you want to do so you an have the freedom to focus on creativity....When you put put art and science to work, magic can happen." -Shantanu Narayen, CEO, Adobe
Because of the increased accessibility to tools, "user expectations have changed... We have a generation that has grown up on mobile devices. We're taking into consideration the expanding needs of design." -- Jamie Myrold, VP of Design, Adobe
Adobe Sensei will "amplify human creativity and intelligence." - Bryan Lamkin, Executive Vice President & General Manager, Digital Media, Adobe
"Is VR dead because AR is big? I hope VR is dead because it’s going to kill me." - Matt Lewis, Practical Magic (he was joking)
"I just started doing yoga and part of that is learning how the mind and body are two different entities and being a person is a collaboration of both. When you’re watching a film, you’re usually also kind of thinking about other things while your body is sitting there. In VR, you’re completely involved as a whole person. Because this is all contained in one environment, it becomes the ultimate teaching tool. What you teach is up to your imagination. From there it’s about making sure those tools are accessible, so you can teach many things to different kinds of people." - Olivia Peace, 2017 Sundance Ignite Fellow
"I want the story to connect with other humans. This is taking it to another level. It’s less about interpretation and more about feeling. We want one human to tell a story to another human with no technical barrier." - Stefano Corazza, Sr Director, Engineering, Adobe
Posted by: Kylee Peña on Oct 19, 2017 at 3:54:50 pm