Meta’s ChatGPT rival could make language barriers a thing of the past

The rise of AI tools like ChatGPT and Google Bard has presented the perfect opportunity to make significant leaps in multilingual speech projects, advancing language technology and promoting worldwide linguistic diversity.

Meta has taken up the challenge, unveiling its latest AI language model – one that can identify more than 4,000 spoken languages and handle speech-to-text and text-to-speech in over 1,100 of them.

The Massively Multilingual Speech (MMS) project means that Meta’s new AI is no mere ChatGPT replica. The model uses unconventional data sources to overcome speech barriers and allow individuals to communicate in their native languages without going through an exhaustive translation process.

Most excitingly, Meta has made MMS open-source, inviting researchers to learn from and expand upon the foundation it provides. This move suggests the company isn’t solely interested in dominating the AI language translation space – it also wants to encourage collaboration in the field.

Bringing more languages into the conversation 

Normally, speech recognition and text-to-speech AI programs need extensive training on a large number of audio datasets, combined with meticulous transcription labels. Many endangered languages found outside industrialised nations lack huge datasets like this, which puts these languages at risk of vanishing or being excluded from translation tools.

According to Gizmochina, Meta took an interesting approach to this issue and dipped into religious texts. These texts provide diverse linguistic renditions that allow Meta to get a ‘raw’ and untapped look at lesser-known languages for text-based research.

The release of MMS as an open-source resource and research project demonstrates that Meta is devoting real time and effort to addressing the lack of linguistic diversity in the tech field, which is frequently limited to the most widely spoken languages.

It’s an exciting development in the AI world – and one that could bring us a lot closer to having the sort of ‘universal translators’ that currently only exist in science fiction. Imagine an earpiece that, through the power of AI, could not only translate foreign speech for you in real time but also filter out the original language so you only hear your native tongue being spoken.

As more researchers work with Meta’s MMS and more languages are added, we could see a world where assistive technology and text-to-speech let us speak to people regardless of their native language, sharing information far more quickly. As someone trying to teach myself a language, I’m excited by this development: it should make real-life conversational practice a lot easier, and help learners get to grips with the informal, colloquial words and phrases only native speakers would know.

TechRadar – All the latest technology news


Google Docs is having some serious issues with its new “inclusive language” warnings

Google is nothing if not helpful: the search giant has built its reputation on making the internet more accessible and easier to navigate. But not all of its innovations are either clever or welcome. 

Take the latest change to Google Docs, which aims to highlight examples of non-inclusive language through pop-up warnings. 

You might think this is a good idea, helping to avoid “chairman” or “fireman” and other gendered language – and you'd be right. But Google has taken things a step further than it really needed to, leading to some pretty hilarious results.

A viral tweet was the first warning sign that perhaps, just perhaps, this feature was a little overeager to correct common word usages. After all, is “landlord” really an example of “words that may not be inclusive to all readers”? 

As Vice has ably demonstrated, Google's latest update to Docs – while undoubtedly well-intentioned – is annoying and broken, jumping in to suggest corrections to some things while blatantly ignoring others. 


A good idea, poorly executed 

The idea behind the feature is well-meaning and will likely help in certain cases. The execution, on the other hand, is poor. 

Vice found that Docs suggested more inclusive language in a range of scenarios, such as for “annoyed” or “Motherboard”, but failed to suggest anything when a speech from neo-Nazi Klan leader David Duke was pasted in, containing the N-word. 

In fact, Valerie Solanas’ SCUM Manifesto – a legendary piece of literature – got more edits than Duke's speech, including suggesting “police officers” instead of “policemen”. 

All in all, it's the latest example of an AI-powered feature that seems like a good idea but in practice has more holes than a Swiss cheese. 

Helping people write in a more inclusive way is a lofty goal, but the implementation leaves a lot to be desired and, ultimately, makes the process of writing harder. 

Via Vice


Google Meet aims to tear down the language barrier, but falls short

Google has rolled out an update for video conferencing software Meet that will help workers communicate more effectively with multilingual colleagues.

In a blog post, the company announced that its live translation feature has now entered general availability, across all Google Meet platforms.

Launched in beta last year, the feature introduces the ability to translate spoken English into foreign language captions in real-time. At launch, supported languages include French, German, Portuguese and Spanish.

Google Meet translation

Among the various opportunities brought about by the transition to remote working is the ability to recruit from an international pool of talent. However, businesses will clearly need a way to address the communication barriers this may create.

At the moment, Google is pitching the translation feature as a way to overcome disparities in language proficiency, rather than a way to facilitate communication between people who do not share a common language.

“Translated captions help make Google Meet video calls more inclusive and collaborative by removing language proficiency barriers. When meeting participants consume content in their preferred language, this helps equalize information sharing, learning, and collaboration and ensures your meetings are as effective as possible for everyone,” explained Google.

However, if the idea is taken to its logical conclusion, it’s easy to imagine the feature being extended in future to support omnidirectional translation between a variety of different languages. This way, workers could communicate freely with colleagues and partners from across the globe.

The feature as it exists today will roll out over the course of the next two weeks, but only to Google Workspace customers that subscribe to the Business Plus plan and beyond.

TechRadar Pro has asked Google whether customers on cheaper plans can expect to receive access to live translation at a later date, and whether the feature will be capable of translating other languages into English in future.
