Engineering Designer Magazine


Rethinking design creativity through generative AI

June 12, 2025 by Geordie Torr

A new study carried out by a research team from Imperial College London, the University of Exeter, and Zhejiang University has explored how different types of generative AI outputs – text, image and three-dimensional – affect the generation of innovative design concepts through combinational creativity, where novel ideas are sparked by merging unrelated elements. The findings reveal a distinct division of labour: large language models are useful in early idea generation, while image and 3D models come into their own in visualisation and prototyping. These insights offer a roadmap for designers and developers seeking to use AI more strategically across the creative workflow.

Creativity often emerges from the interplay of disparate ideas – a phenomenon known as combinational creativity. Traditionally, tools such as brainstorming, mind mapping and analogical thinking have guided this process. Generative AI introduces new avenues: large language models (LLMs) offer abstract conceptual blending, while text-to-image (T2I) and text-to-3D (T2-3D) models turn text prompts into vivid visuals or spatial forms.

Yet despite their growing use, little research has clarified how these tools function across different stages of creativity. Without a clear framework, designers are left guessing which AI tool fits best. Given this uncertainty, in-depth studies are needed to evaluate how various AI dimensions contribute to the creative process.

The new research has tackled this gap. The study investigates how generative AI models with different dimensional outputs support combinational creativity. Through two empirical studies involving expert and student designers, the team compared the performance of LLMs, T2I and T2-3D models across ideation, visualisation and prototyping tasks. The results provide a practical framework for optimising human-AI collaboration in real-world creative settings.

To map AI’s creative potential, the researchers first asked expert designers to apply each AI type to six combinational tasks – including splicing, fusion and deformation. LLMs performed best in linguistic combinations such as interpolation and replacement but struggled with spatial tasks. In contrast, T2I and T2-3D models excelled at visual manipulations, with 3D models especially adept at physical deformation.

In a second study, 24 design students used one AI type to complete a chair design challenge. Those using LLMs generated more conceptual ideas during early, divergent phases but lacked visual clarity. T2I models helped externalise these ideas into sketches, while T2-3D tools offered robust support for building and evaluating physical prototypes. The results suggest that each AI type offers unique strengths, and the key lies in aligning the right tool with the right phase of the creative process.

‘Understanding how different generative AI models influence creativity allows us to be more intentional in their application,’ said Professor Peter Childs, a design engineering expert at Imperial College London. ‘Our findings suggest that large language models are better suited to stimulate early-stage ideation, while text-to-image and text-to-3D tools are ideal for visualising and validating ideas. This study helps developers and designers align AI capabilities with the creative process rather than using them as one-size-fits-all solutions.’

The study’s insights are poised to reshape creative workflows across industries. Designers can now match AI tools to specific phases – LLMs for generating diverse concepts, T2I for rapidly visualising designs and T2-3D for translating ideas into functional prototypes. For educators and AI developers, the findings provide a blueprint for building more effective, phase-specific design tools. By focusing on each model’s unique problem-solving capabilities, this research elevates the conversation around human–AI collaboration and paves the way for smarter, more adaptive creative ecosystems.

The research has been published in Design and Artificial Intelligence.

About Engineering Designer

Engineering Designer is the quarterly journal of the Institution of Engineering Designers (IED).

It is produced by the IED for its members and for those with an interest in engineering and product design, as well as CAD users.
