10th October 2025
Watch the DRS Model Workers 30th Anniversary Film
The article below was originally published by the fantastic Thinkfarm on LinkedIn.
“Do you have the bandwidth to help us?” said the innocuous WhatsApp message that arrived at the start of September. It went on: “Our anniversary party is on Friday 26th. We haven’t had a chance to put our heads together yet to be honest, but the experience and skills you guys bring are extremely good value”. It was that last bit that did it. Despite the tightness of the deadline and the inexactness of the brief, we knew that whatever we did with this client would involve a lot of fun. And we couldn’t refuse such flattery either, could we? So, amid the warning sirens, we started throwing ideas around… that very evening.
The concept that emerged was a teeny bit ‘Art School’ in feel. Based on a scale model of a film set that DRS built last year, we set out to create a short film story that brought the model to life. They had populated it with a dummy crew of action-men figurines, ‘model workers’ that served perfectly as a metaphor for the energetic, can-do approach that DRS bring to their work. We’d mix live action of their own, real life-sized crew – in a full-scale set made by DRS – with stop-frame animation of the model. Surreal cartoon effects butted up against filmed buffoonery. The final pay-off would be that anything you thought had happened in reality had only happened in miniature.
But animating miniature dummy figures isn’t simple. The dolls require properly articulated limbs, manipulated by stop-frame experts. Shooting often has to take place on a controlled animation rig in a process that really takes time and involves vast amounts of shot-planning – both for the animation and for any live-action material it’s meant to match. As our scripting got more detailed, with surreal gag ideas multiplying and the deadline getting closer, we realised that we’d need to think differently if we were going to make it all come together in time. Martin Roach, our Director of Photography on the project, was keen to start finding out if his own experiments in AI could bear fruit. We’d also been experimenting with Generative AI scripting ourselves all summer – on a variety of self-initiated projects in a kind of R&D exercise. It soon became clear that this was the perfect commercial project on which to employ the technology for the first time.
The two-day shoot arrived within a week of the decision, with hardly enough time to bring a freelance production team together, never mind hammer out a detailed script and storyboard. But with plenty of room at Garden Studios in Acton, we were able to time-lapse record DRS’s full-scale build, set up cut-away scenes and a model shoot in separate areas of the space, while shooting the main live action scenes with the DRS crew, all at the same time. Using a hand-held stills camera on ‘shutter burst’ mode enabled us to record the live sequences in a way that mimicked the feel of stop motion animation.
In a separate area, a first for any shoot we’d run, we brought along our own editor to stitch the live action story together as we shot, alongside our own Zoe Prosser working with Runway, the premier AI film generating platform, to prompt and create AI scenes from the model shoot and the live footage. With this set up we could generate animations almost as quickly as they were dreamt up, see how ideas worked in context, and decide what and how to shoot the next material. It allowed us to experiment with the loose storyboard as we worked and meant that DRS could contribute to the process themselves, adding ideas and useful contextual props made by their own art department, on the fly. It all made for a thrilling, agile way of working, that was completely collaborative.
By the end of the second day in the studio, we already had a very usable first-draft edit of the movie. It was so close to being ‘finished’ that DRS would have been happy to show it at their anniversary event, although we agreed the AI sequences needed a day or two more. Whatever fine-tuning was needed, we’d learned something valuable: it really works to combine strands of production – live action filming, stills photography and real-time editing – when you’re using Generative AI as the glue to bind ideas together.
The Model Workers film premiered as planned at DRS’s 30th Anniversary event. It went down a storm with its audience, with Co-Director Jono Moles describing it as “a love letter to the craft of set building and the makers who pour their passion into every project”. We’d happily go along with that as an accolade to the project itself.
The last time that rapid feedback sped up the creative process like this was some 40 years ago, right at the start of Thinkfarm’s journey as a creative agency, just when the very first image-making computers started popping up in small studios, revolutionising the way ideas and editing options could be seen, rejected and developed. We’re certain that the rate of change we’re about to experience is going to be much faster than it was then. As a result of the sheer processing power available, the relationship between concept and realisation is also likely to alter radically, but the same underlying possibilities are there. The ability for clients to see and understand the outputs of our processes – to be involved, reassured and satisfied – should be viewed as another step forward too.
We’ve learned a lot in the space of three weeks. It’s premature to suggest that we’ll adopt such a radical shift in planning and production processes for every branding and campaigning project we carry out. It’s clear that there can be considerable savings, both in terms of time and in the ability to circumvent complex workflows, but whether there’ll be huge cost savings remains to be seen. For us, the timing of an open-ended, collaborative, time-poor brief couldn’t have been better for testing an experimental process like this.
When we embarked on our AI journey on this project, we made a conscious decision to explore what making quirky, surreal, oddball sequences could offer. We were free from the wider debate and concerns about recreating realistic images that bend the meaning of ‘truthful’ representation. With the output of much Generative AI currently limited to a relatively low resolution (720 pixels, which warps somewhat when upscaled even to standard HD/2K files), the fact that the look and feel of our film could remain playful and crude in execution, with visual mistakes evident, gave us a freedom that more technically exacting or risk-averse projects wouldn’t allow.
Nor should we ignore the value of having access to pre-built models and full-scale sets, complete with miniature and full-sized props, all willingly made by DRS. This provided an amazing amount of unique, contextual material to feed into our AI prompts, and it’s clear to us how crucial such material is. Without it, the outputs from even the most carefully crafted prompts are often unusable, unengaging or just plain weird. For certain shots or takes, it took more than 35 prompting sessions to achieve something that felt ‘right’ for the story – even when the source material at our disposal included a huge range of real-life characters, scale dummies, high-quality scenery, models and props.
Maybe the biggest lesson we take away from the process, though, is that the creative landscape has just shifted considerably. We shouldn’t dismiss this, be intimidated by it, or ignore it. Rather, to get the best out of Generative AI, we’re embracing it – so we can pursue the creative possibilities it opens up.