Google I/O 2022: Google’s breakthroughs in artificial intelligence and machine learning, explained

Google, which held its I/O 2022 developer conference late Wednesday, doubled down on the development of artificial intelligence (AI) and machine learning (ML), focusing not only on research but also on product development.

One of Google’s focus areas is making its products, especially those used for communication, more “nuanced and natural”. This includes developing and deploying new language processing models.

Take a look at what the company announced:

AI Test Kitchen

After launching LaMDA (Language Model for Dialogue Applications) last year, which enabled more natural conversations with Google Assistant, Google announced LaMDA 2 along with the AI Test Kitchen, an application that gives users access to the model.

The AI Test Kitchen will let users explore these AI features and give them a sense of what LaMDA 2 is capable of.

Google launched the AI Test Kitchen with three demos. The first, called “Imagine It”, lets users suggest a conversation idea, and Google’s language processing model responds with “imaginative and relevant descriptions” of the idea. The second, called “Talk About It”, tests whether the language model can stay on topic, which can be a challenge. The third, called “List It Out”, offers a potential list of to-dos, things to keep in mind, or pro tips for a given task.

Pathways Language Model (PaLM)

PaLM is a new model for natural language processing and AI. According to Google, it is the company’s largest model to date, with 540 billion parameters.

For now, the model can answer math word problems or explain a joke, thanks to what Google describes as chain-of-thought prompting, which lets it break multi-step problems into a series of intermediate steps.
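PaLM itself is not publicly available, but the prompting idea is easy to illustrate. Below is a minimal, hypothetical sketch of a chain-of-thought prompt: the worked example spells out its intermediate steps, which nudges the model to reason the same way on the new question. No real model is called here; any large-language-model completion endpoint would consume a prompt like this.

```python
# Hypothetical sketch of a chain-of-thought prompt. PaLM is not a
# public API, so no model is called; the point is the prompt format,
# in which the worked example writes out its intermediate steps.

PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and
bought 6 more. How many apples do they have?
A:"""

# Sent to a large language model, a prompt like this tends to elicit
# the intermediate arithmetic ("23 - 20 = 3, then 3 + 6 = 9") before
# the final answer, rather than a one-step guess.
print(PROMPT)
```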

One example shown with PaLM was the AI model answering questions in Bengali and English. Sundar Pichai, CEO of Google and Alphabet, asked the model about popular pizza toppings in New York, and the answer appeared in Bengali, even though PaLM had never seen parallel phrases in the language.

Google’s hope is to extend these capabilities and techniques to more languages and other complex tasks.

Multisearch on Lens

Google also announced new enhancements to its Lens Multisearch tool, which will allow users to search with just an image and a few words.

“In the Google app, you can search with images and text at the same time, much like you might point to something and ask a friend about it,” the company said.

Users will also be able to use an image or screenshot and add “near me” to see local restaurants or retailers that carry clothing, home goods, food, and more.

With an advancement called “scene exploration”, users will be able to use Multisearch to pan their camera and instantly get information about multiple objects in a larger scene.
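Google has not detailed how multisearch works under the hood. One common pattern for combined image-and-text queries, sketched below as a toy, is to embed both inputs into a shared vector space, fuse the two vectors, and rank candidates by similarity; the hash-based encoders here are deterministic stand-ins for real trained models, not Google’s implementation.

```python
# Toy sketch of one way an image+text query could be combined: embed
# both into a shared vector space, fuse the vectors, and rank catalog
# items by cosine similarity. An assumption about the general
# technique, not Google's actual system.

import hashlib
import numpy as np

def _toy_embed(data: bytes, dim: int = 64) -> np.ndarray:
    # Deterministic stand-in for a trained encoder: seed a RNG from a
    # hash of the input and draw a unit vector.
    seed = int.from_bytes(hashlib.sha256(data).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def embed_image(image_bytes: bytes) -> np.ndarray:
    return _toy_embed(image_bytes)

def embed_text(text: str) -> np.ndarray:
    return _toy_embed(text.encode())

def multisearch(image_bytes: bytes, refinement: str,
                catalog: dict[str, np.ndarray]) -> list[str]:
    # Fuse the two query vectors and renormalize.
    query = embed_image(image_bytes) + embed_text(refinement)
    query /= np.linalg.norm(query)
    # Catalog vectors are unit-norm, so a dot product is cosine similarity.
    scores = {name: float(vec @ query) for name, vec in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Example: photograph a dress, then refine the search with "green".
catalog = {n: embed_image(n.encode())
           for n in ["green floral dress", "yellow floral dress", "blue sofa"]}
print(multisearch(b"<dress photo bytes>", "green", catalog))
```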

Immersive Google Maps

Google has announced a more immersive way to use its Maps app. Using computer vision and artificial intelligence, the company merged billions of Street View and aerial images to create a rich digital model of the world. With the new immersive view, users can experience what a neighborhood, landmark, restaurant, or other popular place looks like.

Support for new languages in Google Translate

Google also added 24 new languages to Translate, including Assamese, Bhojpuri, Konkani, Sanskrit and Mizo. These languages were added using “Zero-Shot Machine Translation”, in which a machine learning model sees only monolingual text, meaning it learns to translate into another language without ever seeing a direct example of that translation.

However, the company noted that the technology is not perfect and that it will continue to improve these models.
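For a sense of how a single model can be pointed at a language it has little or no paired data for, the toy below illustrates the target-language control token from Google’s earlier multilingual translation research, the mechanism usually cited for zero-shot pairs. The class and token format are hypothetical stand-ins, not the production Translate system.

```python
# Toy illustration of the target-language-token convention used in
# multilingual neural translation, the mechanism usually behind
# "zero-shot" language pairs. Hypothetical stand-in, not Google's
# production Translate model.

class MultilingualModel:
    """One shared model covering many languages. A control token tells
    the decoder which language to emit; "zero-shot" means a pair such
    as English -> Mizo never appeared as parallel sentences in training."""

    def generate(self, tagged_source: str) -> str:
        # Placeholder for real encoder-decoder inference.
        return f"[model output for {tagged_source!r}]"

def translate(model: MultilingualModel, text: str, target_lang: str) -> str:
    # Prepending a token such as <2lus> (Mizo) steers the shared decoder
    # toward the requested output language.
    return model.generate(f"<2{target_lang}> {text}")

print(translate(MultilingualModel(), "Where is the train station?", "lus"))
```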