Scientists develop the robot chef of the future

Researchers have taught a robotic "chef" to watch cooking demonstration videos, identify the recipe being prepared, and recreate the dish itself.

Researchers at the University of Cambridge trained their robotic chef with a "cookbook" of eight straightforward salad recipes. After watching a video of a person demonstrating one of the recipes, the robot was able to identify which one was being prepared and make it.

The videos also helped the robot gradually expand its cookbook: by the end of the trial, it had created a ninth recipe on its own. The findings, published in the journal IEEE Access, show that video can be a rich and valuable source of data for automated food production, and could make robot chefs easier and cheaper to deploy.

Robotic chefs have long been a staple of science fiction, but in practice a robot struggles to reproduce a dish. Some businesses have developed prototype robot chefs, though none are yet commercially available, and they lag far behind their human counterparts in skill.

Human chefs may learn new techniques by watching others prepare food or viewing videos on YouTube, but teaching a robot to prepare a variety of meals is expensive and time-consuming.

"We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can -- by identifying the ingredients and how they go together in the dish," said Grzegorz Sochacki, the paper's first author from Cambridge's Department of Engineering.

Sochacki, a PhD student in the Bio-Inspired Robotics Laboratory, and his colleagues devised eight straightforward salad recipes and filmed themselves preparing them. They then trained their robot chef using a publicly available neural network, which had already been trained to recognize objects including the fruits and vegetables used in the eight recipes: broccoli, carrot, apple, banana, and orange.

Using computer vision techniques, the robot analyzed each frame of video and identified the different objects and features, such as the knife and the ingredients, as well as the human demonstrator's arms, hands, and face. Both the recipes and the videos were converted to vectors, and the robot performed mathematical operations on these vectors to judge how closely a demonstration resembled a recipe.
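The paper's exact vector representation isn't described here, but the idea can be sketched: treat each recipe and each demonstration as a vector of ingredient counts, then compare them with a similarity measure such as cosine similarity. The ingredient set, the example cookbook, and the helper functions below are illustrative assumptions, not the authors' actual code.

```python
import math

# Illustrative sketch only: recipes and demonstrations as ingredient-count
# vectors. The ingredient list and recipes are assumptions, not from the paper.
INGREDIENTS = ["broccoli", "carrot", "apple", "banana", "orange"]

def to_vector(counts):
    """Map an {ingredient: count} dict onto a fixed-order count vector."""
    return [counts.get(name, 0) for name in INGREDIENTS]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A toy cookbook of recipe vectors.
cookbook = {
    "apple-carrot salad": to_vector({"apple": 2, "carrot": 2}),
    "fruit salad": to_vector({"apple": 1, "banana": 1, "orange": 1}),
}

# A demonstration with scaled-up portions still matches the same recipe,
# because cosine similarity ignores overall magnitude.
demo = to_vector({"apple": 3, "carrot": 3})
best = max(cookbook, key=lambda name: cosine_similarity(cookbook[name], demo))
print(best)  # → apple-carrot salad
```

Because cosine similarity compares direction rather than magnitude, a demonstration with doubled or tripled portions maps to the same recipe, which mirrors the scaling behavior the researchers describe.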

The robot could recognize which of the meals was being cooked by accurately recognizing the components and the activities of the human chef. The robot deduced that the carrot would be sliced up if the human demonstrator held a knife in one hand and a carrot in the other.
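That kind of inference can be sketched as a simple rule over the objects detected in a frame. The function, object labels, and rules below are hypothetical stand-ins, not the system's actual logic.

```python
# Hypothetical sketch: infer an upcoming action from the set of object labels
# detected in one frame. The labels and rules are illustrative only.
def infer_action(detected_objects):
    """Guess the demonstrator's next action from detected object labels."""
    ingredients = {"broccoli", "carrot", "apple", "banana", "orange"}
    held = ingredients & set(detected_objects)
    if "knife" in detected_objects and held:
        # A knife plus an ingredient suggests the ingredient will be chopped.
        return f"chop {held.pop()}"
    if held:
        return f"add {held.pop()}"
    return "unknown"

print(infer_action({"hand", "knife", "carrot"}))  # → chop carrot
```

A real system would apply such reasoning across many frames and fold the inferred actions into the vector comparison, rather than relying on a single snapshot.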

Despite picking up on only 83% of the human chef's actions, the robot identified the correct recipe 93% of the time. The robot could also tell when a recipe had been slightly altered, for example when the demonstrator made two servings instead of one, or made a typical human error. Additionally, the robot prepared a new, ninth salad after correctly identifying its demonstration, adding it to its cookbook.

"It's amazing how much nuance the robot was able to detect," said Sochacki. "These recipes aren't difficult; they're essentially just chopped fruits and vegetables. But it was really effective at recognizing, for instance, that a recipe that calls for two chopped apples and two carrots is the same as one that calls for three chopped apples and three carrots."

The videos used to train the robot chef are not like the food videos made by some social media influencers, which are full of fast cuts and visual effects and rapidly switch between the person cooking and the dish they're preparing. For example, the robot struggled to identify a carrot if the human demonstrator wrapped their hand around it; the demonstrator had to hold the carrot up so the robot could see the whole vegetable.

According to Sochacki, "Our robot isn't interested in the kinds of food videos that go viral on social media, because they're just too difficult to follow." However, if these robot chefs become faster and more accurate at identifying ingredients in cooking demonstration videos, they may one day be able to use sites like YouTube to learn a whole range of recipes.

Beko Plc and the Engineering and Physical Sciences Research Council (EPSRC), a division of UK Research and Innovation (UKRI), both contributed to funding the study.
