AI developers can turn any webcam into a Kinect
Meet Skeletron, a DIY motion-tracking system that behaves like a Microsoft Kinect
Two AI developers have used machine learning to turn a simple webcam into a system that behaves much like a Microsoft Kinect, showing you don’t need expensive equipment to bring motion-tracking tech into your living room.
Called Skeletron, the system was created by developer and artist Or Fleisher and software engineer and interaction designer Dror Ayalon. To bring it to life, they used a webcam that cost just $10, TensorFlow (Google’s open-source machine learning platform) and the Unity game development engine.
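Under the hood, that recipe is straightforward to sketch. The pair haven’t said exactly which model Skeletron runs, so the snippet below uses MoveNet, a publicly available TensorFlow Hub pose model, purely as a stand-in; it also predicts 2D keypoints rather than the full 3D skeleton Skeletron estimates, but the capture-and-predict loop looks much the same: grab frames from a cheap webcam with OpenCV and run the model on each one.

```python
# Hypothetical sketch of a webcam pose-estimation loop. MoveNet is a
# stand-in model; the article doesn't name the one Skeletron actually uses.
import cv2
import tensorflow as tf
import tensorflow_hub as hub

# Load a single-person pose model from TensorFlow Hub.
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

cap = cv2.VideoCapture(0)  # any cheap RGB webcam
while cap.isOpened():      # stop with Ctrl+C
    ok, frame = cap.read()
    if not ok:
        break
    # MoveNet Lightning expects a 192x192 int32 RGB image.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    inp = tf.image.resize_with_pad(tf.expand_dims(rgb, axis=0), 192, 192)
    inp = tf.cast(inp, tf.int32)
    # Output shape [1, 1, 17, 3]: 17 keypoints as (y, x, confidence),
    # normalized to the image. Skeletron would produce 3D joints instead.
    keypoints = movenet(inp)["output_0"].numpy()[0, 0]
    print(keypoints[0])  # the first keypoint (the nose)
cap.release()
```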
On YouTube Ayalon writes: “Skeletron is a system that predicts joints and human skeleton position in 3D from real-time video taken by any (cheap) RGB camera, such as a webcam.”
“The system sends the data about the position of the human body to Unity, a 3D game development engine, to allow engineers, artists, and creative technologists to use it to develop digital experiences.”
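The video doesn’t spell out how that hand-off to Unity works on the wire, but a common approach for this kind of project is to stream the predicted joints as JSON over UDP to a listener script inside the Unity scene. The endpoint, port and message format below are all assumptions, included only to illustrate the idea.

```python
# Hypothetical transport layer: stream pose data to Unity over UDP.
# The host, port and JSON schema here are illustrative assumptions,
# not Skeletron's documented protocol.
import json
import socket

UNITY_HOST, UNITY_PORT = "127.0.0.1", 5065  # assumed Unity listener endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_skeleton(joints):
    """joints: iterable of (name, x, y, z) tuples from the pose model."""
    payload = json.dumps(
        {"joints": [{"name": n, "x": x, "y": y, "z": z} for n, x, y, z in joints]}
    )
    sock.sendto(payload.encode("utf-8"), (UNITY_HOST, UNITY_PORT))

# Example: stream one frame's worth of joints.
send_skeleton([("head", 0.0, 1.7, 0.2), ("left_hand", -0.4, 1.1, 0.5)])
```

On the Unity side, a script would read each datagram, parse the JSON and map the joint positions onto a rigged character or other scene objects, which is what opens the system up to artists and creative technologists.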
At this stage, Skeletron may be little more than a cool experimental project. But it’s refreshing to see a motion-tracking system with so many potential applications put together for so little money and without any dedicated depth sensors, especially considering that the Microsoft Kinect cost at least ten times as much and took considerably longer to set up.
Although Microsoft pulled the plug on the Kinect last year, the tech has clearly inspired other applications that put motion tracking to good use, most notably Face ID in the iPhone X. That’s no surprise, given that Apple bought PrimeSense, the company that licensed the hardware design and chip used in the Kinect, back in 2013.