Embracing AI imagery in Virtual Production


“We’ve been experimenting with AI visual content generators such as MidJourney, Dall-E and WonderAI for over a year now. It’s been incredible to see how the creative community and our in-house creative team have been integrating this medium into production workflows.”

“This prompted us to review our use of AI across the entire content creation pipeline, from ideation to post-production: video, VP, animation, stop motion, and photography.” – James Pierechod

So, within the creative team, we’ve been looking at how we can apply Generative AI to create photorealistic backdrops that would be unattainable without ‘mega bucks’ licence budgets, days of set building, CGI artists, or scouts searching for that perfect location.

This is particularly true when it comes to developing the contextual backgrounds in our Virtual Production pipeline. Our VP rigs allow us to use imagery, video, and Unreal Engine CGI scenes within traditional photography and video production processes – all shot in real time.

These AI backgrounds are the fastest way we’ve found of creating bespoke ‘lifestyle context’ in product content. We often need to create or illustrate very niche spaces to hero our clients’ products and represent their brand, and these are sometimes the most difficult to achieve with relatable results.

We’ve been refining and running prompts to get the best results. Two months back, the previous version of the software (V3) couldn’t create images this visually accurate, and two months before that, V2 was a completely different game again! It’s changing how we innovate: we’re no longer dismissing the ‘ridiculous’ ideas. We’re simply setting them aside until the next iteration can answer that need.

What’s amazing with these use cases is the “speed to iterate”. These contexts weren’t impossible before; they were just created by printing PSD backgrounds, designing studio sets, building in CGI, or shooting on location. All of those options, however, would have required days of planning and prep. Using AI, we’re creating the above content in the same day, in the same studio.

But how much can we influence these AI backgrounds?

Composition: Frame it up a bit… Wider a bit… Change the time of day. 

There’s a perception that AI removes the creative from the process, when actually it’s only by having a creative working on everything around the image that you get the best results. The key to these backdrops looking believable is good, realistic lighting. And by ‘good’ we mean consistent, directional interaction between the layers, objects, and atmosphere of the image – matching the temperature, intensity, and quality of light throughout the scene (including how it reacts with materials and opacities).
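As a rough sketch of what that matching can look like in code – this is an illustration on our part, assuming a Python environment with NumPy and Pillow, and hypothetical file names rather than our actual grading workflow – a simple per-channel gain gets an AI backdrop into the same colour ballpark as the foreground plate before finer grading:

```python
import numpy as np
from PIL import Image

def match_channel_means(backdrop_path: str, plate_path: str, out_path: str) -> None:
    """Scale the backdrop's RGB channels so its average colour matches the plate's."""
    backdrop = np.asarray(Image.open(backdrop_path).convert("RGB"), dtype=np.float64)
    plate = np.asarray(Image.open(plate_path).convert("RGB"), dtype=np.float64)
    # Per-channel gain: ratio of the plate's mean colour to the backdrop's.
    gain = plate.mean(axis=(0, 1)) / backdrop.mean(axis=(0, 1))
    matched = np.clip(backdrop * gain, 0, 255).astype(np.uint8)
    Image.fromarray(matched).save(out_path)

# Hypothetical file names, for illustration only.
match_channel_means("ai_backdrop.png", "foreground_plate.png", "backdrop_matched.png")
```

It’s a crude stand-in for proper grading, but it shows the principle: the layers have to agree on colour temperature before anything else will sell the shot.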

We’re used to working with lifestyle contexts aligned to a brand’s target audience, product, and campaign message. These looks are often signed off and built in advance of a shoot, which means some attributes of the scene aren’t flexible.

So our AI pipeline needs a focus on ‘iteration’ too – we’re using MidJourney alongside other AI tools like Stable Diffusion to amend aspects of a generated scene fast, as in the sketch below.
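As a minimal sketch of that kind of quick amendment – assuming the Hugging Face diffusers library, an illustrative checkpoint, and hypothetical file names rather than our actual pipeline – an inpainting pass re-renders only a masked region of the backdrop and leaves everything else untouched:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Illustrative checkpoint; any Stable Diffusion inpainting model works the same way.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical files: the approved backdrop plus a mask (white = region to repaint).
backdrop = Image.open("backdrop_v1.png").convert("RGB").resize((512, 512))
mask = Image.open("window_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="bright, mid-day sunlight floods in from the left, soft shadows",
    image=backdrop,
    mask_image=mask,
    num_inference_steps=40,
).images[0]
result.save("backdrop_v2.png")
```

Because only the masked pixels change, the parts of the scene that have already been signed off stay exactly as approved.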

A little extra detail in the prompt lets us get more specific, adding phrases such as “bright, mid-day sunlight floods in from the left”. Even requesting the same image “taken from a low POV” yields great results, particularly when shooting tabletop as we have above.
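To show how those phrases slot into an iteration loop – a sketch assuming Stable Diffusion via diffusers and an illustrative base prompt, not our production setup – we can batch-render lighting and camera variants and compare them side by side:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

base = "cosy scandinavian kitchen interior, photorealistic, 35mm"  # illustrative brief
variants = [
    "bright, mid-day sunlight floods in from the left",
    "warm golden-hour light, long shadows",
    "taken from a low POV, shallow depth of field",
]

# One image per lighting/camera note, saved for side-by-side review.
for i, extra in enumerate(variants):
    image = pipe(f"{base}, {extra}", num_inference_steps=30).images[0]
    image.save(f"variant_{i}.png")
```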

Looking to future visual content trends, we can also start to explore other worlds with AI: surreal, colourful, graphic product photography and video where 90% of the content is caught in-camera. Again, that saves heaps of time and resource on a brand’s campaign. No-brainer!


This is truly changing the way we work. It’s not restricting or removing our creative input, nor is it a threat to branded content production. It’s just changing how we think.

The fact that both of these projects were completed by one creative in under eight hours is insane compared to the time, costs, and effort involved in traditionally producing a suite of content like this.


Bring on V5…

Things we tested (and discarded) three months ago are now working in our production environments. It’s changed the way we evaluate and approach innovation. We’ve needed to track and iterate our thinking, document it, and ‘flow’ everything through logic gates – because you never know when an idea or avenue will work its way into an opportunity. We can’t wait to see where we can take this next…