Google Search can help people learn English with new language tutor tool

Duolingo may have a new rival on its hands, as Google Search on Android will soon begin helping people in certain countries practice and improve their spoken English.

Over the next few days, the company will be rolling out an “interactive speaking” tool to users in Argentina, Colombia, India, Indonesia, Mexico and Venezuela. It provides practice sessions where students are asked questions and must respond verbally “using a provided vocabulary word” in their answer. For example, Google Search might ask “What do you do for fun?” with the vocab word being “play”. A student could respond with “I play video games in my free time” or “I like to play sports with friends”. Above the question sits a little animation of a cartoon character interacting with the student.

Each session lasts about three to five minutes, after which the tool delivers “personalized feedback” as well as the “option to sign up for daily reminders” to continue lessons so you don't fall behind.

English tutor on Google Search

(Image credit: Future)

According to the announcement post, the feature can be accessed through a small window under Google Translate on the search engine; tapping it starts a lesson. Once you’re done, you’ll be taken to a “Speak” section showing a calendar of how many times per week you’ve practiced, the total number of words practiced, and the classes you’re a part of.

You can take multiple courses at once, and pause them whenever you want if you’re short on time. Google says that because the feature lives on Android phones, people can learn “at their own pace, anytime, anywhere.”

Focusing on context

Something we found particularly interesting is the type of feedback students will receive because it focuses heavily on context. 

First, there’s semantic feedback, which tells you whether your “response was relevant to the question” at hand and whether it could be understood by the other person (or, in this case, the AI). The tool will also suggest ways to improve your grammar, such as pointing out missing words. Below the feedback sits a series of sample answers “at varying levels of language complexity”, meant to show students alternate ways of responding to a question – the idea being that you don’t always have to say the same thing.

Google's personalized feedback

(Image credit: Future)

Additionally, the search engine will provide “contextual translation” if someone is having a tough time understanding a phrase. You can tap on any word in a sentence to see what it means in a particular context.
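
To make the shape of that feedback concrete, here’s a deliberately simple Python sketch of the categories described above – checking whether an answer uses the provided vocabulary word, giving a rough relevance verdict, and attaching sample answers. It’s purely illustrative: Google’s tutor is built on machine learning models (more on that below), and the function name, heuristics and sample answers here are our own.

```python
# Hypothetical toy sketch of the feedback categories described above.
# Google's real tutor is driven by machine learning models; this only
# illustrates the structure of the feedback, not how it is computed.

SAMPLE_ANSWERS = [  # "varying levels of language complexity"
    "I play video games.",
    "I like to play sports with my friends.",
    "In my free time I usually play the guitar or play football with friends.",
]

def practice_feedback(question: str, vocab_word: str, answer: str) -> dict:
    """Return rough feedback on a (transcribed) spoken answer."""
    words = answer.lower().split()

    used_vocab = vocab_word.lower() in words   # did the student use the vocab word?
    long_enough = len(words) >= 4              # crude stand-in for a relevance model,
                                               # which would really compare against `question`
    return {
        "semantic": "Relevant, understandable answer." if (used_vocab and long_enough)
                    else f"Try answering the question using the word '{vocab_word}'.",
        "grammar": "Looks like a complete sentence." if answer.strip().endswith((".", "!", "?"))
                   else "Consider finishing the sentence.",
        "sample_answers": SAMPLE_ANSWERS,
    }

print(practice_feedback("What do you do for fun?", "play",
                        "I play video games in my free time."))
```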

Future expansion

We highly recommend reading through the post on the company’s Research blog, as it explains the technology behind this feature. It’s rather interesting: the tool is powered by several machine learning models, including LaMDA, the same AI behind Google Bard.

Google does have plans to expand its language tutor to “more countries and languages in the future”, although there’s no word on exactly when that will happen. We reached out asking for more details on future availability, and whether the tutor will ever arrive on desktop or iOS – for now, it remains exclusive to Android. We’ll update this story if we hear back.

While we have you, be sure to check out TechRadar's list of the best language learning apps for 2023.

In an effort to surpass ChatGPT, Meta is open-sourcing its large language AI model

In what could be an effort to catch up to – and maybe even surpass – the internet’s current AI darling ChatGPT, Meta will be open-sourcing its large language model, LLaMA 2, in collaboration with Microsoft.

This makes LLaMA 2 free to use for commercial and research purposes, putting it on the same level as OpenAI’s ChatGPT and Microsoft Bing.

The announcement was part of Microsoft’s Inspire event, where the company revealed more about the AI tools coming to the Microsoft 365 platform, as well as how much they would cost users. Meta, for its part, says the decision to make LLaMA 2 open source is intended to give businesses, researchers, and developers access to more AI tools.

Free roam LLaMA 

Rather than hiding the LLM behind a paywall, Meta will allow anyone and everyone to access LLaMA 2, opening the door to more AI tools being built on top of the model – tools that could then be used by the general public.

This move also gives Meta a sense of transparency, something that has been conspicuously missing from some AI development projects; Meta notably also showed surprising restraint with regard to the release (or lack thereof) of its powerful new speech-generation AI, Voicebox.

According to The Verge, Meta will be offering the open-source LLM via Microsoft's Windows and Azure platforms, as well as AWS and Hugging Face. In a statement, the company said: “We believe an open approach is the right one for the development of today’s AI models, especially those in the generative space where the technology is rapidly advancing”.
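
For a sense of what that access looks like in practice, here’s a minimal sketch of loading the model through Hugging Face’s transformers library. It assumes you’ve requested and been granted access to Meta’s gated meta-llama repository and that the 7B chat variant is the checkpoint you want – neither detail is spelled out in the announcement itself.

```python
# Sketch only: assumes access to the gated meta-llama repo on Hugging Face,
# the transformers and accelerate packages (for device_map="auto"), and
# enough memory for the 7B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # chat-tuned 7B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain why open-sourcing a large language model matters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```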

Meta seems to be pushing for a community angle with the move, hoping that interested user bases will help stress test and troubleshoot issues with LLaMA 2 quickly and collaboratively. It’s a bold move, and could be the perfect way to counteract Meta’s recent struggles in the AI arena. I can’t wait to see what people make with access to an open-source AI model.

Meta’s ChatGPT rival could make language barriers a thing of the past

The rise of AI tools like ChatGPT and Google Bard has presented the perfect opportunity to make significant leaps in multilingual speech projects, advancing language technology and promoting worldwide linguistic diversity.

Meta has taken up the challenge, unveiling its latest AI language model – one that can identify more than 4,000 spoken languages, and recognize and generate speech in over 1,100 of them.

The Massively Multilingual Speech (MMS) project means that Meta’s new AI is no mere ChatGPT replica. The model uses unconventional data sources to overcome speech barriers and allow individuals to communicate in their native languages without going through an exhaustive translation process.

Most excitingly, Meta has made MMS open-source, inviting researchers to learn from and expand upon the foundation it provides. The move suggests the company is deeply invested in dominating the AI language translation space, while also encouraging collaboration in the field.

Bringing more languages into the conversation 

Normally, speech recognition and text-to-speech AI models need extensive training on large audio datasets paired with meticulous transcription labels. Many endangered languages, particularly those found outside industrialised nations, lack datasets of this size, which puts them at risk of vanishing or being excluded from translation tools.

According to Gizmochina, Meta took an interesting approach to this issue and dipped into religious texts, which have been translated into a huge number of languages. These diverse renditions gave Meta a ‘raw’ and largely untapped look at lesser-known languages for its research.

The release of MMS as an open-source resource and research project shows that Meta is devoting a lot of time and effort to addressing the lack of linguistic diversity in the tech field, which frequently caters only to the most widely spoken languages.
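
Because both the models and the code are openly available, trying MMS out is relatively straightforward. The sketch below is a rough illustration, assuming a recent transformers release that ships the MMS checkpoints and a 16kHz audio clip of your own (a silent placeholder is used here); it is not Meta’s reference pipeline.

```python
# Rough sketch: assumes transformers >= 4.30 (which added the MMS checkpoints)
# plus torch and numpy. The silent array below is a stand-in for a real
# 16 kHz mono recording loaded with, say, librosa or soundfile.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"                 # multilingual ASR checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Swap in the adapter for the language you want to transcribe (ISO 639-3 code),
# e.g. "fra" for French or "swh" for Swahili.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

audio = np.zeros(16_000, dtype=np.float32)       # one second of silence (placeholder)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))                     # transcription in the target language
```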

It’s an exciting development in the AI world – and one that could bring us a lot closer to having the sort of ‘universal translators’ that currently only exist in science fiction. Imagine an earpiece that, through the power of AI, could not only translate foreign speech for you in real time but also filter out the original language so you only hear your native tongue being spoken.

As more researchers work with Meta’s MMS and more languages are included, we could see a world where assistive technology and text-to-speech allow us to speak to people regardless of their native language, sharing information much more quickly. As someone trying to teach themselves a language, I’m super excited about the development: it’ll make real-life conversational practice a lot easier, and help me get to grips with the informal and colloquial words and phrases only native speakers would know.

Google Docs is having some serious issues with its new “inclusive language” warnings

Google is nothing if not helpful: the search giant has built its reputation on making the internet more accessible and easier to navigate. But not all of its innovations are either clever or welcome. 

Take the latest change to Google Docs, which aims to highlight examples of non-inclusive language through pop-up warnings. 

You might think this is a good idea, helping to avoid “chairman” or “fireman” and other gendered language – and you'd be right. But Google has taken things a step further than it really needed to, leading to some pretty hilarious results.

Inclusive?

A viral tweet was the first warning sign that perhaps, just perhaps, this feature was a little overeager to correct common word usages. After all, is “landlord” really an example of “words that may not be inclusive to all readers”? 

As Vice has ably demonstrated, Google's latest update to Docs – while undoubtedly well-intentioned – is annoying and broken, jumping in to suggest corrections to some things while blatantly ignoring others. 
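
To see why a word-list approach behaves this way, consider a deliberately naive sketch of such a checker – entirely our own illustration, not Google’s system. It flags whatever is on its list and is blind to everything else, which is exactly the lopsided pattern Vice observed.

```python
# Deliberately naive illustration of a word-list "inclusive language" checker.
# This is not Google's implementation; it simply shows why a fixed substitution
# list flags harmless words while missing genuinely offensive text.
import re

SUGGESTIONS = {
    "chairman": "chairperson",
    "fireman": "firefighter",
    "policemen": "police officers",
    "landlord": "property owner",   # the over-reach the viral tweet poked fun at
    "motherboard": "mainboard",     # hypothetical entry, echoing Vice's example
}

def check_inclusive_language(text: str) -> list[tuple[str, str]]:
    """Return (flagged word, suggested replacement) pairs found in the text."""
    warnings = []
    for word, suggestion in SUGGESTIONS.items():
        if re.search(rf"\b{word}\b", text, flags=re.IGNORECASE):
            warnings.append((word, suggestion))
    return warnings

print(check_inclusive_language("The landlord spoke to the chairman."))
# -> [('chairman', 'chairperson'), ('landlord', 'property owner')]

print(check_inclusive_language("A hateful speech full of slurs."))
# -> []  (nothing on the list, so nothing gets flagged)
```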

A good idea, poorly executed 

The idea behind the feature is well-meaning and will likely help in certain cases. The execution, on the other hand, is poor. 

Vice found that Docs suggested more inclusive language in a range of scenarios, such as for “annoyed” or “Motherboard”, but failed to suggest anything when a speech from neo-Nazi and former Ku Klux Klan leader David Duke – one containing the N-word – was pasted in. 

In fact, Valerie Solanas’ SCUM Manifesto – a legendary piece of literature – got more suggested edits than Duke's speech, including “police officers” instead of “policemen”. 

All in all, it's the latest example of an AI-powered feature that seems like a good idea but in practice has more holes than a Swiss cheese. 

Helping people write in a more inclusive way is a lofty goal, but the implementation leaves a lot to be desired and, ultimately, makes the process of writing harder. 

Via Vice

Google Meet aims to tear down the language barrier, but falls short

Google has rolled out an update for its video conferencing software Meet that will help workers communicate more effectively with multilingual colleagues.

In a blog post, the company announced that its live translation feature has now entered general availability, across all Google Meet platforms.

Launched in beta last year, the feature introduces the ability to translate spoken English into foreign language captions in real-time. At launch, supported languages include French, German, Portuguese and Spanish.
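
To picture how a feature like this hangs together – speech recognition feeding machine translation, with the result rendered as captions – here’s a rough sketch built from openly available models via Hugging Face pipelines. It’s our own illustration of the general approach, not Google Meet’s actual stack, and the audio filename is a placeholder.

```python
# Rough sketch of a caption-translation pipeline built from open models.
# It illustrates the general approach only: not Google Meet's actual stack.
from transformers import pipeline

# Speech-to-text for the English audio, then English -> French translation.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

english_text = asr("meeting_clip.wav")["text"]            # placeholder audio file
french_caption = translate(english_text)[0]["translation_text"]

print(french_caption)   # the string that would be rendered as a live caption
```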

Google Meet translation

Among the various opportunities brought about by the transition to remote working is the ability to recruit from an international pool of talent. However, businesses will clearly need a way to address the communication barriers this may create.

At the moment, Google is pitching the translation feature as a way to overcome disparities in language proficiency, rather than a way to facilitate communication between people who do not share a common language.

“Translated captions help make Google Meet video calls more inclusive and collaborative by removing language proficiency barriers. When meeting participants consume content in their preferred language, this helps equalize information sharing, learning, and collaboration and ensures your meetings are as effective as possible for everyone,” explained Google.

However, if the idea is taken to its logical conclusion, it’s easy to imagine the feature being extended in future to support omnidirectional translation between a variety of different languages. This way, workers could communicate freely with colleagues and partners from across the globe.

The feature as it exists today will roll out over the course of the next two weeks, but only to Google Workspace customers that subscribe to the Business Plus plan and beyond.

TechRadar Pro has asked Google whether customers on cheaper plans can expect to receive access to live translation at a later date, and whether the feature will be capable of translating other languages into English in future.
