AI research is progressing at a rapid pace, and now Google has unveiled its Pathways Language Model (PaLM), the largest language model presented so far.

In a blog post, the Google research team explains that its Pathways Language Model (PaLM), with 540 billion parameters, is larger than Microsoft and Nvidia's Megatron-Turing Natural Language Generation model (MT-NLG). Most importantly, in various tests the model performed so convincingly that it could keep up with human performance.

During the development of the Gopher language model, researchers at DeepMind, an Alphabet company, realized that simply scaling up a model does not necessarily lead to better results. They have since developed a much smaller model called Retro, which can nevertheless keep up with AIs containing up to 25 times as many parameters thanks to one peculiarity: Retro can access a database of two trillion tokens of text and look up passages of similar language to improve its predictions.
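To illustrate the retrieval idea behind Retro, here is a minimal toy sketch. It is purely illustrative: the passage database, similarity measure, and prompt format are made-up stand-ins, not DeepMind's implementation. The point is simply that the most similar stored passages are looked up and prepended to the prompt, so the model can draw on external text rather than relying only on what is stored in its parameters.

```python
# Toy illustration of retrieval-augmented prediction in the spirit of Retro.
# The database, similarity measure, and prompt format are hypothetical stand-ins.
from difflib import SequenceMatcher

PASSAGE_DB = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
    "PaLM is a large language model announced by Google.",
]

def similarity(a: str, b: str) -> float:
    """Crude text similarity; a real system would compare learned embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def retrieve(query: str, k: int = 2) -> list:
    """Return the k stored passages most similar to the query."""
    return sorted(PASSAGE_DB, key=lambda p: similarity(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the model can 'look things up' at prediction time."""
    context = "\n".join(retrieve(query))
    return f"{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Which company announced PaLM?"))
```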

Google, too, no longer relies on sheer size alone when developing PaLM, although the model is of course huge. The new PaLM system combines the power of AI with a form of multitasking to increase performance. This multitasking ability, which Google calls Pathways, has now been used for the first time to train a language model. The researchers have thus succeeded in creating a model whose considerable size also translates into qualitative gains: PaLM performed excellently in the usual benchmarks for assessing the performance of a language model.

In almost all tests, PaLM is said to have left the competition far behind. These were mostly monolingual tasks such as simple question answering, fill-in-the-blank and sentence-completion exercises, reading comprehension, logical reasoning, and tests in which the right conclusions must be drawn from statements made in natural language. According to Google, PaLM demonstrated abilities in such tests at the level of language comprehension of 9- to 12-year-olds.

The system also performed "strongly" on translation tasks. The same applies to demanding learning from comparatively little information, so-called "few-shot" learning. Here too, PaLM is reported to have achieved average human scores.
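As a rough illustration of what "few-shot" means in practice, consider the small made-up example below (the task and examples are hypothetical, not Google's benchmarks): the model receives only a handful of solved examples inside the prompt and must infer the pattern for a new input, without any additional training.

```python
# Hypothetical few-shot prompt: a handful of labelled examples followed by a new case.
examples = [
    ("The movie was wonderful.", "positive"),
    ("I would not recommend this restaurant.", "negative"),
    ("The concert exceeded all expectations.", "positive"),
]

def few_shot_prompt(new_input: str) -> str:
    """Assemble a few-shot prompt from a handful of labelled examples."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

print(few_shot_prompt("The plot was confusing and far too long."))
```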

Despite Pathways and Retro, the research team believes that further performance gains are possible through pure scaling of the models, which suggests that PaLM's 540 billion parameters may soon be exceeded. Of course, the sheer size of such AI models can also lead to increasing opacity, making biases in current models increasingly difficult to detect. And this is a serious ethical problem that currently has no solution.

By Daniela La Marca