Algorithm creates animations from a written description

Computers have made remarkable progress in understanding natural language in recent years. One of the researchers trying to combine this progress with advances in computer-generated animation is Louis-Philippe Morency, associate professor at the Language Technologies Institute (LTI) of Carnegie Mellon University.

Morency is pursuing this goal together with his colleague Chaitanya Ahuja, using a new type of neural architecture called Joint Language-to-Pose, or JL2P. The model learns from written sentences and physical animations simultaneously, mapping both into a shared representation.
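To make the idea concrete, here is a minimal sketch of what such a joint embedding could look like in PyTorch. This is an illustrative assumption, not the authors' actual JL2P implementation: the module names, dimensions, and loss are placeholders for the general technique of pulling paired sentence and pose embeddings together.

```python
# Illustrative sketch of a joint language-to-pose embedding (not the authors' code):
# a sentence and a motion sequence are mapped into the same vector space, so that
# matching sentence/animation pairs end up close together.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):              # token_ids: (batch, words)
        _, h = self.rnn(self.embed(token_ids))
        return h[-1]                            # sentence embedding: (batch, hidden_dim)

class PoseEncoder(nn.Module):
    def __init__(self, pose_dim=63, hidden_dim=128):  # e.g. 21 joints x 3 coordinates
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden_dim, batch_first=True)

    def forward(self, poses):                   # poses: (batch, frames, pose_dim)
        _, h = self.rnn(poses)
        return h[-1]                            # motion embedding: (batch, hidden_dim)

sentence_encoder = SentenceEncoder(vocab_size=1000)
pose_encoder = PoseEncoder()
tokens = torch.randint(0, 1000, (8, 12))        # toy batch: 8 sentences of 12 tokens
poses = torch.randn(8, 30, 63)                  # their paired 30-frame pose sequences

# Training would pull each sentence embedding toward its paired motion embedding:
loss = nn.functional.mse_loss(sentence_encoder(tokens), pose_encoder(poses))
loss.backward()
```

A full model would additionally decode pose sequences back out of this shared space, which is what turns a new sentence into an animation.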

Morency admits that we are not yet at the point where an artificial intelligence can make a film from a script, but he calls this a “very exciting” moment in history, not least because, beyond animating virtual characters, such techniques could also be applied to robots.

A robot could then be operated simply by voice, performing actions we ask of it even when they were never explicitly pre-programmed. For the time being, the researchers have succeeded in getting the algorithm to animate simple stylized figures, starting from simple expressions such as “A person walks forward” and moving on to somewhat more difficult constructions such as “A person takes a step forward, then turns around.”

The researchers' aim, of course, is to arrive at complex animations: sequences with multiple, even simultaneous, actions described in ever greater detail.


See also:

https://www.cmu.edu/news/stories/archives/2019/september/language-into-movement.html

Image source:

https://i.ytimg.com/vi/w4rTQpqEgyY/maxresdefault.jpg

The blackest material ever made

The press release accompanying a new study published in ACS Applied Materials & Interfaces speaks of the blackest material ever made. The MIT researchers report that they have developed a material that is “10 times blacker” than anything previously reported.

It is a material made of vertically aligned carbon nanotubes, grown on an aluminum foil etched with chlorine. The result is a film that absorbs at least 99.995% of incoming light at any wavelength, making it appear darker to the eye than any black we can imagine.
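As a back-of-the-envelope illustration of what “10 times blacker” means, one can compare reflectance, i.e. the small fraction of light that is not absorbed. The figure assumed below for the previous record is illustrative, not taken from the paper.

```python
# Comparing "blackness" via reflectance = 1 - absorbed fraction.
absorbed_new = 0.99995    # the MIT film: at least 99.995% of incoming light absorbed
absorbed_prev = 0.9995    # illustrative figure for a previously reported black

reflectance_new = 1 - absorbed_new     # ~0.005% of light escapes
reflectance_prev = 1 - absorbed_prev   # ~0.05% of light escapes

print(f"new film reflects {reflectance_new:.3%} of incoming light")
print(f"previous black reflects {reflectance_prev:.3%}")
print(f"'times blacker': {reflectance_prev / reflectance_new:.1f}x")
```

On this reading, "10 times blacker" means the new film lets through one tenth as much light as the previous record holder.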

So far, the film has been used in a work of art created by Brian Wardle, a professor of aeronautics and astronautics at MIT, together with the artist Diemut Strebe: a yellow diamond covered with the ultra-black film.

Practical applications? According to the researchers, the uses for this material could be many. For example, it could be used in optical blinders that reduce unwanted glare, or in space telescopes, where it could help locate exoplanets by more efficiently blocking out the light of the stars they orbit.


See also:

http://news.mit.edu/2019/blackest-black-material-cnt-0913

https://pubs.acs.org/doi/10.1021/acsami.9b08290

Image source:

https://upload.wikimedia.org/wikipedia/commons/4/46/Vantablack_01.JPG