Producing steel without emitting CO2 may be possible thanks to hydrogen

Producing steel also means emitting a lot of CO2 into the atmosphere. It is estimated that the steel industry alone generates between 7 and 9% of all CO2 emissions produced through the use of fossil fuels, as noted in a new statement published on the CORDIS website.

Several studies are underway to limit CO2 production during steelmaking, but few have achieved results that suggest real applications. Now a new project, called H2Future and funded by the European Union, aims to achieve, albeit gradually, a real decarbonisation of steel production. To this end, it plans to use hydrogen produced with renewable electricity.

A pilot plant has already been set up in Linz, Austria, which uses 6 MW of electricity from renewable sources to produce up to 1,200 m³ of green hydrogen. The press release on the launch of the plant speaks of “an important milestone for the industrial application of electrolysis” in the steel industry, refineries, fertiliser production and other industrial sectors.

The new plant is based on electrolysis, a process in which water is split into hydrogen and oxygen by an electric current, as explained in the press release on the project website: “PEM technology works using a proton exchange membrane as the electrolyte. This membrane has a special property: it is permeable to protons but not to gases such as hydrogen and oxygen. This means that in a PEM-based electrolyzer the membrane acts as an electrolyte and separator to prevent the mixing of gaseous products.”
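As a rough illustration of what those electrolysis figures imply, the sketch below simply divides the plant's electrical power by an assumed energy cost per cubic metre of hydrogen. The ~5 kWh/m³ value is a typical literature figure for PEM electrolysis, not a number taken from the H2Future press release.

```python
# Back-of-the-envelope check of the plant's figures (illustrative only).
# Assumption: a PEM electrolyzer needs roughly 5 kWh of electricity per
# cubic metre of hydrogen -- a common literature value, not project data.

def hydrogen_output_m3_per_hour(power_kw: float, kwh_per_m3: float = 5.0) -> float:
    """Hydrogen volume produced per hour for a given electrical power."""
    return power_kw / kwh_per_m3

print(hydrogen_output_m3_per_hour(6000))  # 6 MW plant -> 1200.0
```

Under that assumed efficiency, a 6 MW plant yields about 1,200 m³ of hydrogen per hour, consistent with the figure quoted in the article if it refers to hourly output.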

Space-time dragging effect detected in white dwarf-pulsar system

When a very massive object, such as a planet or a star, rotates, it literally drags the surrounding space-time with it. The phenomenon, predicted by Einstein’s general relativity, is known as “frame dragging” or the Lense-Thirring effect, after the two Austrian physicists Josef Lense and Hans Thirring, who first derived it within general relativity in 1918.

The effect also exists around the rotating Earth, but in that case it is so small that it has proved extremely difficult to measure. The effect is more pronounced around denser, more massive objects, such as white dwarfs or neutron stars. And it is by studying a binary system composed of a white dwarf and a pulsar that researchers have now found direct evidence of it.
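The size of the effect can be made quantitative. For a body with angular momentum $J$, the standard leading-order expression for the frame-dragging angular velocity at distance $r$ along the rotation axis is a textbook result (not taken from the study itself):

```latex
\Omega_{\mathrm{LT}} = \frac{2 G J}{c^{2} r^{3}}
```

Because $\Omega_{\mathrm{LT}}$ grows with the angular momentum and falls off as $1/r^{3}$, it is vanishingly small around Earth but far more pronounced close to compact, rapidly rotating objects such as white dwarfs and neutron stars.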

The researchers, led by Vivek Venkatraman Krishnan of the Max-Planck-Institut für Radioastronomie, observed a pulsar in a tight, fast orbit around a white dwarf with a mass similar to that of the Sun. The pulsar completes a full orbit around the white dwarf in less than five hours, whizzing along at over one million km/h, and the two bodies are so close together that their separation is less than the diameter of the Sun.
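Those two figures are mutually consistent, as a quick back-of-the-envelope check shows. The sketch assumes a circular orbit whose diameter equals the Sun's (about 1.39 million km) and a period of about 4.7 hours; both are rough values chosen for illustration, not data from the study.

```python
import math

# Rough consistency check of the figures quoted in the article
# (illustrative only; real orbital parameters are in the paper).
# Assumptions: circular orbit with diameter ~ the Sun's diameter,
# orbital period ~ 4.7 hours.

SUN_DIAMETER_KM = 1.39e6
PERIOD_H = 4.7

# Speed = circumference of the orbit / period.
orbital_speed_km_h = math.pi * SUN_DIAMETER_KM / PERIOD_H
print(f"{orbital_speed_km_h:,.0f} km/h")  # on the order of a million km/h
```

The result comes out just under a million km/h, in line with the article's "over one million km/h" given that these inputs are only approximate.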

By measuring the arrival times of the pulsar’s very short pulses at Earth, using data collected over a period of almost twenty years, the researchers concluded that the Lense-Thirring effect causes a slow, long-term drift in the way the pulsar and the white dwarf orbit each other.

It is the dragging of space-time itself that causes the orientation of the pulsar’s orbit around the white dwarf to change slowly. Among other things, the new study confirms a hypothesis, advanced in previous studies, that the white dwarf of this binary system, called PSR J1141-6545, formed before the pulsar. Such binary systems are considered quite rare.

The study could also help reveal what is inside a white dwarf: despite decades of research, it is still not known how matter is arranged inside this very strange cosmic object, given the conditions of extremely strong gravity to which that matter is subjected, conditions that cannot be reproduced in the laboratory.

Artificial intelligence passes an eighth-grade science test for the first time

A piece of software based on artificial intelligence has passed an American eighth-grade science test, according to an article in the New York Times. It is the first time that an artificial intelligence has passed a test of this level.

For a few years now, hundreds of computer scientists have been competing to create an artificial intelligence capable of passing a test of this level, but the only one to succeed so far appears to be the Allen Institute for Artificial Intelligence.

The Seattle-based institute has created a new artificial intelligence system that passed the science test by correctly answering more than 90% of the questions. The software, called Aristo, is designed to mimic the logic of human decision-making.

And it is perhaps precisely for this reason that it managed to answer not only questions that could be solved by a simple information search (something even Google can do when the questions are simple enough), but also questions that required genuine reasoning: essentially the classic word problems that primary and secondary school students have to solve, problems that call for the use of logic.

Standardized science tests used in schools are increasingly being used to assess the level of artificial intelligence, and developers themselves see them as excellent benchmarks for gauging the progress and capabilities of their software. Tests of this kind are considered more meaningful than classic benchmarks based on games such as chess or backgammon.

Games, after all, have fixed rules that can be learned, whereas a science test, a series of questions that also demands logic, is harder to pass. Jingjing Liu, a Microsoft researcher who has also worked on various artificial intelligence initiatives of the Allen Institute, remains cautious and openly states that such technology cannot yet be compared with real eighth-grade students: their ability to reason, at least for the moment, is still superior.

However, the progress made with Aristo can already be put to use in the short term in a range of services, from the answers an Internet search engine provides to the various tasks a digital assistant can perform. That progress builds on recent advances in artificial intelligence, especially in neural networks that can process natural language thanks to models trained on huge amounts of data.


Algorithm recognizes bullies and aggressors on Twitter with 90% accuracy

A new algorithm that recognizes bullies and online aggressors has been developed by a group of researchers from Binghamton University. Specifically, the researchers developed an algorithm that, according to the press release, recognizes bullies on Twitter with an accuracy of 90%.

More and more computer science and artificial intelligence laboratories are working on methods for automatically recognizing bullying and aggression on the internet, work that would clearly benefit the large companies running social networks, which, at least for the time being, mostly rely on human moderators.

Jeremy Blackburn, a computer scientist at the university, is trying to bridge this gap by analyzing the behavioral patterns of “bullies” on Twitter and comparing them with those of “normal” users. To this end, the researcher and his colleagues created special crawlers to collect data from Twitter faster and more efficiently.

They then relied on natural language processing algorithms and other tools already available for social network analysis, and were able to develop an algorithm that automatically classifies two patterns of offensive online behavior: cyberbullying and cyber-aggression.

The reported accuracy of the algorithm is 90%. It is able to flag problematic behavior, for example users who threaten or make racist comments toward other users.
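The article does not publish the team's actual model, but the general idea of classifying short texts by learned vocabulary patterns can be sketched in a few lines. Everything below, including the example tweets and the "bully"/"normal" labels, is invented for illustration; a real system would use far richer features and a proper statistical model.

```python
# Toy sketch of text classification by class vocabulary overlap.
# NOT the Binghamton team's algorithm: training data and labels are
# invented, and the scoring rule is the simplest possible one.
from collections import Counter

# Hypothetical labeled tweets (invented examples).
TRAIN = [
    ("you are worthless and everyone hates you", "bully"),
    ("nobody likes you just quit already", "bully"),
    ("great game last night congrats to the team", "normal"),
    ("thanks for the recipe it turned out great", "normal"),
]

def train(data):
    """Count word frequencies per class (unigram bag-of-words)."""
    counts = {"bully": Counter(), "normal": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Naive rule: pick the class whose vocabulary overlaps the text most."""
    scores = {
        label: sum(c[w] for w in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "everyone hates you just quit"))  # bully
```

Even this toy version shows why labeled examples matter: the classifier only "knows" whatever vocabulary patterns appeared in its training data.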


Researchers train artificial intelligence to recognize smells

A team of Google Brain researchers has published a new study on arXiv explaining how they trained artificial intelligence software to recognize smells. They first assembled a dataset of more than 5,000 molecules and labeled each one with descriptions identifying its type of odor.

The researchers used a type of artificial intelligence called a graph neural network (GNN) to associate these molecules with their descriptions on the basis of their structures. The software cannot be compared with the sensitivity of the human sense of smell, which is very difficult to define; for example, there are scents that smell one way to one person and another way to someone else.

Moreover, some molecules have exactly the same atoms and the same bonds but are arranged as mirror images of each other: such molecules, which software usually treats as practically identical, can have completely different smells. And that is without mentioning scents that are the result of several fragrances combined.
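To make the "molecule as a graph" idea concrete, here is a toy sketch of the message-passing step at the heart of a graph neural network. It is not Google's model: the three-atom molecule, the features and the averaging rule are all invented for illustration.

```python
# Toy illustration of the graph idea behind a GNN (not Google's model):
# atoms are nodes, bonds are edges, and each node repeatedly aggregates
# its neighbors' features. One aggregation round is sketched below.

# Hypothetical ethanol-like fragment: nodes = atoms, edges = bonds.
atoms = {0: "C", 1: "C", 2: "O"}
bonds = {0: [1], 1: [0, 2], 2: [1]}  # adjacency list

# Trivial starting feature: 1.0 for oxygen, 0.0 for everything else.
features = {i: (1.0 if a == "O" else 0.0) for i, a in atoms.items()}

def message_pass(features, bonds):
    """Each node's new feature = mean of its own and its neighbors' features."""
    new = {}
    for node, nbrs in bonds.items():
        vals = [features[node]] + [features[n] for n in nbrs]
        new[node] = sum(vals) / len(vals)
    return new

updated = message_pass(features, bonds)
print(updated)  # the oxygen's "signal" spreads to the carbon bonded to it
```

After one round, the carbon directly bonded to the oxygen picks up part of its feature while the distant carbon does not, which is how repeated rounds let the network encode each atom's structural context before predicting a property such as odor.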

Despite these undeniable difficulties, Google researchers think that this is an important first step, a step that can also be useful in the fields of chemistry, sensory neuroscience and the production of synthetic fragrances.

This is not the first team of researchers to attempt to imitate the characteristics of an olfactory system using artificial intelligence. For example, a team of scientists at the Barbican Centre in London used machine learning techniques to “recreate” the smell of an extinct flower, and IBM is running experiments to create new fragrances generated by artificial intelligence.


New material made of cellulose and silk can replace plastic

A group of researchers from Aalto University and the VTT Technical Research Centre of Finland has created a new bio-based material from wood pulp fibres and spider silk proteins. The result is a material that is at once strong and extensible and that, according to the press release presenting the research, could replace plastic in the future.

Thanks to its intrinsic properties, the material could also be used in medical applications, in the textile industry and in packaging. Its advantage over plastic is clear: it would be biodegradable and would not damage nature the way plastic does.

For their experiments, the researchers used birch pulp. They converted it into cellulose nanofibrils and arranged them into a kind of stiff scaffold, which they then combined with a matrix of very soft spider silk. The silk does not come from real spider webs; it is nevertheless biological silk, produced in the laboratory by bacteria carrying synthetic DNA.

In essence, since the structure of the silk DNA is known, the material can be built almost from scratch. Beyond the qualities of the material itself, the study demonstrates new possibilities for protein technology.

As Pezhman Mohammadi, a VTT researcher and one of the authors of the study together with Markus Linder, specified: “In the future, we could produce similar compounds with slightly different building blocks and achieve a different set of functions for different applications.” The researchers are also working on other projects that use similar methods to build new materials.
