
Facebook’s Latest Choreography Artificial Intelligence Is A Dancing Queen


Almost everyone has danced at some point, and every culture has some form of dance. Whether you're tapping a toe, popping and locking, or gracefully pirouetting, dance is one of our oldest forms of expression and, aside from a few feathered exceptions, an exclusively human art.


“In this work, we focus on designing interesting choreographies by combining the best of what humans are naturally good at – heuristics of ‘good’ dance that an audience might find appealing – and what machines are good at – optimizing well-defined objective functions,” the team wrote in a study published Tuesday.

This isn’t the first time researchers have tried to teach AIs to dance. In 2016, Swedish choreographer Louise Crnkovic-Friis and her husband, Peltarion CEO Luka Crnkovic-Friis, trained a recurrent neural network, dubbed Chor-rnn, on 48 hours of Louise’s movements.

The system could not only choreograph new dances, but do so in her specific style. Similarly, in 2017, Wayne McGregor, resident choreographer at the Royal Ballet, teamed with Google Arts & Culture to develop a choreographic artificial intelligence.

It is capable of interpreting his company’s dance style and generating new movements based on the thousands of hours of video it was trained on. And in 2019, NVIDIA partnered with the University of California, Merced to build a deep learning model able to generate new moves in a subject’s style and in time with the music.

However, these systems simply mimicked the movements they were shown. They used that data to generate new choreography, but it was derived from watching humans dance, not from imagining new dances on their own.

What’s more, they were limited to the styles of dance and music they had been trained on. You couldn’t expect a model trained on classical music to successfully create funky disco moves. But Facebook’s AI is far more stylistically flexible.

“Instead of sort of trying to mimic the choreography that was already there,” Facebook AI researcher Devi Parikh told Engadget, “we wanted to see if we could discover something novel.”

“We’re trying to see if we can just lay out certain very high level intuitive constraints,” she explained. “All we say is that the final movement should be anchored with the music that was provided as input. We don’t place any other constraints on what exactly the movement should look like.”


Parikh’s team — Purva Tendulkar (Georgia Tech), Abhishek Das (Facebook AI), and Aniruddha Kembhavi (Allen Institute for AI) — trained the AI on 25 ten-second clips from 22 songs spanning a wide musical variety, everything from traditional African and Chinese melodies to modern rock and jazz.

The computation procedure is fairly straightforward. “It’s a search procedure that, when given an input piece of music, we compute a matrix of representation that tells us which pieces of music at two different points in time are more similar to each other than others,” Parikh explained. “And then, we use the search procedure to find a dance sequence whose matrix represents the same pattern.”
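The idea Parikh describes, compute a self-similarity matrix for the music, then search for a dance whose own self-similarity matrix shows the same pattern, can be sketched with a toy one-dimensional "dot" dancer. To be clear, this is an illustrative simplification, not the paper's implementation: the audio features, the Gaussian similarity kernel for positions, and plain random search are all assumptions made for the sketch.

```python
import numpy as np

def music_self_similarity(features):
    """Cosine similarity between audio feature vectors at every pair of
    time steps; entry (i, j) says how alike moments i and j sound."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-8)
    return unit @ unit.T

def dance_self_similarity(positions):
    """Similarity between dot positions over time: 1 when two frames put
    the dot in the same place, decaying with distance (Gaussian kernel)."""
    diff = positions[:, None] - positions[None, :]
    return np.exp(-diff ** 2)

def search_dance(music_features, n_iters=500, seed=0):
    """Random search for a 1-D dot trajectory whose self-similarity
    matrix best matches the music's (lowest mean squared difference)."""
    rng = np.random.default_rng(seed)
    target = music_self_similarity(music_features)
    n = len(music_features)
    best, best_cost = None, np.inf
    for _ in range(n_iters):
        cand = rng.uniform(-1.0, 1.0, size=n)  # dot position per frame
        cost = np.mean((dance_self_similarity(cand) - target) ** 2)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

With real audio you would replace the toy feature vectors with something like per-frame spectral features, and the actual system searches over richer movement representations than a single dot, but the matching principle is the same: repeated moments in the music should map to repeated moments in the dance.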

Since the dance movements aren’t based on a human’s, their graphic representations look more like a Winamp visualizer than what we’d normally think of as a dance routine. In these experiments, dancing took the form of a dot moving back and forth along a straight line, a series of pulsating waves, or a crudely rendered stick figure. The team judged whether a routine was acceptable by its level of creativity, as defined by one of four baseline measurements.

“Dances where an agent moves predictably (i.e. not surprising/novel) or that are not synchronized with the music (i.e. low quality) will be deemed less creative,” the team hypothesized. The generated dances were then evaluated by Amazon Mechanical Turk (AMT) workers, who were shown a pair of dances and asked which went better with the music, as well as which was more surprising, creative and inspiring.
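Pairwise comparisons like those the AMT workers made don't directly yield a score per dance. A minimal way to aggregate them into one, counting how often each dance wins the comparisons it appears in, can be sketched as follows; this is my simplification for illustration, and the study's own analysis may aggregate the judgments differently.

```python
from collections import Counter

def win_rates(judgments):
    """judgments: list of (dance_a, dance_b, winner) tuples from pairwise
    human comparisons. Returns each dance's fraction of comparisons won."""
    wins, appearances = Counter(), Counter()
    for a, b, winner in judgments:
        appearances[a] += 1
        appearances[b] += 1
        wins[winner] += 1  # winner is assumed to be one of a or b
    return {d: wins[d] / appearances[d] for d in appearances}
```

A dance that is both synchronized with the music and surprising should, under the team's hypothesis, win more of these head-to-head comparisons than a predictable or out-of-sync one.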

The system is still in the early stages of development. Moving forward, Parikh hopes to train a neural network to generate dances directly based on the input music, without having to perform the search procedure.

“From an AI perspective, creativity can be thought of as a holy grail, the ultimate challenge of intelligence — something very centric to what makes us human. We can have tools that enhance human creativity, that make the creative process more engaging, more satisfying for humans. I feel like that’s a very powerful use of AI and technology in general,” Parikh concluded.
