MIT researchers have developed aerial microrobots that can fly with speed and agility comparable to those of their biological counterparts. A collaborative team designed a new AI-based controller for the robotic insect, enabling it to follow gymnastic flight paths such as executing continuous body flips.
With a two-part control scheme that combines high performance with computational efficiency, the robot’s speed and acceleration increased by about 450 per cent and 250 per cent, respectively, compared to the researchers’ best previous demonstrations. The robot was also agile enough to complete 10 consecutive somersaults in 11 seconds, even when wind disturbances threatened to push it off course.
‘We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate,’ said Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science. ‘Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal.’
Chen’s group has been building robotic insects for more than five years. They recently developed a more durable version of their tiny robot, a microcassette-sized device that weighs less than a paperclip. The new version uses larger flapping wings that enable more agile movements, powered by a set of squishy artificial muscles that flap the wings at an extremely rapid rate.
But the controller – the ‘brain’ of the robot that determines its position and tells it where to fly – was hand-tuned by a human, limiting the robot’s performance. For the robot to fly quickly and aggressively like a real insect, it needed a more robust controller that could account for uncertainty and perform complex optimisations quickly.

Such a controller would be too computationally intensive to deploy in real time, especially given the complicated aerodynamics of the lightweight robot. To overcome this challenge, Chen’s group joined forces with researchers from the lab of Jonathan P. How, the Ford Professor of Engineering in the Department of Aeronautics and Astronautics. Together, they crafted a two-step, AI-driven control scheme that provides the robustness necessary for complex, rapid manoeuvres and the computational efficiency needed for real-time deployment.
‘The hardware advances pushed the controller so there was more we could do on the software side, but at the same time, as the controller developed, there was more they could do with the hardware. As Kevin’s team demonstrates new capabilities, we demonstrate that we can utilise them,’ How said.
For the first step, the team built what is known as a model-predictive controller. This type of powerful controller uses a dynamic, mathematical model to predict the behaviour of the robot and plan the optimal series of actions to safely follow a trajectory.
While computationally intensive, it can plan challenging manoeuvres such as aerial somersaults, rapid turns and aggressive body tilting. This high-performance planner is also designed to respect constraints on the force and torque the robot can apply, which is essential for avoiding collisions.
For instance, to perform multiple flips in a row, the robot would need to decelerate in such a way that its initial conditions are exactly right for doing the flip again. ‘If small errors creep in, and you try to repeat that flip ten times with those small errors, the robot will just crash. We need to have robust flight control,’ How said.
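The article does not include code, but the core loop of a model-predictive controller can be illustrated with a toy example. The sketch below is not the team's actual controller: it assumes a simplified one-dimensional double-integrator stand-in for the robot's dynamics, with illustrative constants (`DT`, `HORIZON`, `U_MAX`). At each step it solves a short-horizon tracking optimisation under a force limit and applies only the first planned action before re-planning:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative constants: time step, planning horizon, force limit.
DT, HORIZON, U_MAX = 0.02, 20, 2.0

def rollout(x0, u_seq):
    """Predict future positions from state (p, v) under a control sequence."""
    p, v = x0
    traj = []
    for u in u_seq:
        v += u * DT   # toy double-integrator dynamics
        p += v * DT
        traj.append(p)
    return np.array(traj)

def mpc_step(x0, p_ref):
    """Solve one finite-horizon tracking problem; return the first control."""
    def cost(u_seq):
        traj = rollout(x0, u_seq)
        # Tracking error plus a small control-effort penalty.
        return np.sum((traj - p_ref) ** 2) + 1e-3 * np.sum(u_seq ** 2)
    res = minimize(cost, np.zeros(HORIZON),
                   bounds=[(-U_MAX, U_MAX)] * HORIZON, method="L-BFGS-B")
    return res.x[0]  # execute only the first action, then re-plan

# Track a constant reference position of 1.0, starting from rest.
state = [0.0, 0.0]
for _ in range(100):
    u = mpc_step(tuple(state), 1.0)
    state[1] += u * DT
    state[0] += state[1] * DT
```

Executing only the first action of each plan and then re-planning is what lets this style of controller absorb disturbances such as wind gusts, while the bounds play the role of the force and torque constraints described above.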
Through a process called imitation learning, the team used this expert planner to train a ‘policy’ based on a deep-learning model to control the robot in real time. A policy is the robot’s decision-making engine, which tells the robot where and how to fly.
Essentially, the imitation-learning process compresses the powerful controller into a computationally efficient AI model that can run very quickly. The key was having a smart way to generate just enough training data to teach the policy everything it needs to know for aggressive manoeuvres. The AI-driven policy takes the robot’s position as input and outputs control commands, such as thrust force and torques, in real time.
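The distillation idea can be sketched in miniature. The example below is not the team's pipeline: it substitutes a cheap hand-written ‘expert’ controller (with made-up gains) for the expensive planner, collects state-action pairs from it offline, and fits a fast surrogate policy by plain least squares rather than a deep network:

```python
import numpy as np

# Hypothetical stand-in "expert": imagine this is an expensive planner
# that we can only afford to query offline, at training time.
def expert_action(state):
    p, v = state
    return np.clip(-4.0 * p - 2.0 * v, -2.0, 2.0)  # drive state to origin

# 1) Collect training data by querying the expert across sampled states.
rng = np.random.default_rng(0)
states = rng.uniform(-0.3, 0.3, size=(2000, 2))
actions = np.array([expert_action(s) for s in states])

# 2) Fit a cheap policy (here, linear least squares) to imitate the expert.
X = np.hstack([states, np.ones((len(states), 1))])  # append a bias term
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def policy(state):
    """Fast real-time surrogate: one dot product instead of an optimisation."""
    return np.array([*state, 1.0]) @ w
```

In the real system the surrogate is a deep network and the expert is the model-predictive planner, but the structure is the same: the expensive optimisation is queried only during training, and flight-time control reduces to a fast forward pass.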
In their experiments, this two-step approach enabled the insect-scale robot to fly 447 per cent faster while exhibiting a 255 per cent increase in acceleration. The robot was able to complete ten somersaults in 11 seconds, and the tiny robot never strayed more than four or five centimetres off its planned trajectory.
The researchers were also able to demonstrate saccade movement, which occurs when insects pitch very aggressively, fly rapidly to a certain position, and then pitch the other way to stop. This rapid acceleration and deceleration helps insects localise themselves and see clearly.
‘This bio-mimicking flight behaviour could help us in the future when we start putting cameras and sensors on board the robot,’ Chen said.
Adding sensors and cameras so the microrobots can fly outdoors, without being attached to a complex motion capture system, will be a major area of future work. The researchers also want to study how onboard sensors could help the robots avoid colliding with one another or coordinate navigation.
‘For the micro-robotics community, I hope this paper signals a paradigm shift by showing that we can develop a new control architecture that is high-performing and efficient at the same time,’ said Chen.
The research has been published in Science Advances.


