Raffaello D'Andrea

RoboEarth

RoboEarth created a shared "Wikipedia for robots": a cloud-based knowledge base where robots worldwide could store maps, object models, and task recipes, learn from each other, and collectively improve. These achievements were made possible through a pan-European research consortium I co-led.

A World Wide Web for Robots

The idea for RoboEarth began with a lesson from Kiva Systems. At Kiva, our robots sent what they learned to a central server, which assimilated that data into its existing knowledge of the world before making the updated information available to the entire fleet. This was in 2004, when cloud computing was still in its infancy (two years before Amazon launched its first cloud services), and it offered a glimpse of what could be possible if that model were extended across many sites and robot types. I began to wonder: what if that principle could be scaled far beyond a single warehouse? Could robots in different environments, such as labs, hospitals, and factories, share what they learned so that others could benefit immediately? Could we build, in effect, a shared brain for robots?
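The loop itself is simple enough to sketch. The Python toy below, with entirely hypothetical names and a made-up travel-time model (it is not Kiva's actual software), shows the pattern in miniature: each robot reports what it measured, a central server folds the report into the fleet's shared knowledge, and every robot plans against the updated estimate.

```python
# Illustrative sketch of a fleet-learning loop: robots report local
# measurements, a central server aggregates them, and all robots read
# from the updated model. All names here are hypothetical.
from collections import defaultdict

class CentralKnowledgeServer:
    """Aggregates per-edge travel times reported by the whole fleet."""

    def __init__(self):
        self.totals = defaultdict(float)   # edge -> sum of observed seconds
        self.counts = defaultdict(int)     # edge -> number of observations

    def report(self, edge, seconds):
        """A robot uploads one observation of how long an edge took."""
        self.totals[edge] += seconds
        self.counts[edge] += 1

    def expected_cost(self, edge, default=10.0):
        """Any robot can query the fleet-wide average for planning."""
        if self.counts[edge] == 0:
            return default
        return self.totals[edge] / self.counts[edge]

server = CentralKnowledgeServer()
server.report(("A3", "A4"), 12.5)          # robot 1's experience
server.report(("A3", "A4"), 9.5)           # robot 2's experience
print(server.expected_cost(("A3", "A4")))  # 11.0, now shared by the fleet
```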

In 2009, ETH Zürich and Technische Universiteit Eindhoven set out to answer that question, co-leading an EU-funded research program under the Seventh Framework Programme (FP7). Our partners included Philips, Universität Stuttgart, Universidad de Zaragoza, Technische Universität München, and Universität Bremen. Together, we created RoboEarth: a global, web-style database allowing robots to upload maps, object models, and step-by-step "recipes" for tasks, and enabling others to download, adapt, and execute them in new environments.
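To make that interaction concrete, here is a minimal Python sketch of the upload/download cycle against a web-style knowledge base. The URL, paths, and response fields are illustrative assumptions, not the actual RoboEarth interface.

```python
# A sketch of publishing and reusing a task recipe via a hypothetical
# REST endpoint. Endpoint names and payload fields are assumptions.
import requests

BASE = "https://knowledge.example.org/api"  # hypothetical knowledge base

def upload_recipe(name, steps, robot_model):
    """Publish a task recipe so other robots can reuse it."""
    payload = {"name": name, "steps": steps, "source_robot": robot_model}
    r = requests.post(f"{BASE}/recipes", json=payload, timeout=10)
    r.raise_for_status()
    return r.json()["id"]

def download_recipe(name):
    """Fetch a recipe by name; the caller adapts it to its own hardware."""
    r = requests.get(f"{BASE}/recipes/{name}", timeout=10)
    r.raise_for_status()
    return r.json()

# One robot teaches; a different robot, on a different platform, reuses.
upload_recipe(
    "serve-drink",
    ["locate(bottle)", "grasp(bottle)", "navigate(patient)", "hand_over()"],
    robot_model="demo-platform-a",
)
recipe = download_recipe("serve-drink")
```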

Over the next five years, we built the infrastructure and proved the concept through live demonstrations. In one early test, a robot learned to serve a drink and uploaded the instructions; a different robot, on a different platform, then downloaded and performed the task successfully. In the final showcase, four heterogeneous robots worked together in a mock hospital, each drawing on RoboEarth's shared knowledge to navigate, manipulate, and collaborate in real time.

At the time, the term cloud robotics didn't even exist (it wasn't coined until 2010, by researchers at Google), yet RoboEarth was already putting the concept into practice. We offloaded computation to the cloud, shared perception and task data across platforms, and treated knowledge as a global, evolving resource rather than something locked inside each machine.
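That offloading pattern fits in a few lines. In the sketch below, the cloud service URL and its response format are hypothetical; the point is the structure: try the shared, cloud-hosted model first, and fall back to onboard computation when the network is unavailable.

```python
# A sketch of cloud offloading for perception: prefer a (hypothetical)
# cloud service, degrade gracefully to a lighter onboard model.
import requests

CLOUD_URL = "https://perception.example.org/detect"  # hypothetical service

def detect_objects(image_bytes, onboard_detector):
    """Prefer cloud-scale perception; fall back to local compute."""
    try:
        r = requests.post(CLOUD_URL, data=image_bytes, timeout=0.5)
        r.raise_for_status()
        return r.json()["objects"]            # richer, fleet-trained model
    except requests.RequestException:
        return onboard_detector(image_bytes)  # smaller local fallback
```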

While RoboEarth officially concluded in 2014, its ideas and design patterns helped set the foundation for the cloud robotics movement. One tangible continuation was Rapyuta, a cloud robotics engine developed during the project's final phase, which evolved into a commercial platform through Rapyuta Robotics, a company that has raised over USD 80 million and deployed cloud-connected autonomous mobile robots, forklifts, and warehouse automation systems in live industrial settings.

Since RoboEarth first demonstrated the value of collective robot learning and cloud-based intelligence, the cloud robotics market has steadily expanded. Analysts estimate its 2024 size at roughly USD 7–8 billion, with projections ranging from USD 35 billion to USD 55 billion by 2033, implying annual growth rates between 18% and 26%. While today's deployments in logistics, healthcare, and manufacturing are shaped by many factors, they share a central principle that RoboEarth proved in practice: robots connected through the cloud can adapt faster, perform better, and benefit from each other's experience.