Forget Duolingo – Google Translate just got a massive AI upgrade that gives it over 100 new languages

Google Translate is adding 110 new languages to its library, the largest expansion the platform has ever seen. The update leverages Google's PaLM 2 large language model, an AI tool that helps the service translate accurately across a far wider array of languages than before. Together, the new languages are spoken by approximately 614 million people, or about 8% of the global population. 
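
As a quick sanity check on that figure (assuming a world population of roughly 8 billion, which is our estimate rather than a number from the announcement), the 8% claim holds up:

```python
# Share of the global population covered by the 110 new languages
speakers = 614_000_000           # speakers of the newly added languages
world_population = 8_000_000_000  # rough 2024 estimate (assumption)

share = speakers / world_population
print(f"{share:.1%}")  # → 7.7%
```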

The list spans widely spoken languages, regional dialects, and languages native to smaller communities. African languages saw the biggest expansion, with Fon, Kikongo, Luo, Ga, Swati, Venda, and Wolof all joining the list. At the other end of the spectrum, Cantonese is likely one of the most widely spoken additions, as is Punjabi (Shahmukhi), the most spoken language in Pakistan. 

The update also adds Sicilian; Manx, a Celtic language spoken on the Isle of Man that nearly went extinct; and Tok Pisin, an English-based creole spoken in Papua New Guinea.

PaLM 2 Talk

For languages that span multiple regional dialects or competing spelling standards, Google opts for the form most likely to be understood by the most people. The Romani language offered by Google Translate, for example, draws on three different dialects. 

The PaLM 2 LLM made the update possible, enhancing Google Translate's ability to learn and shift between languages efficiently. The model is particularly adept at handling closely related languages, such as Awadhi and Marwadi (both close to Hindi) or the various French creoles. PaLM 2's advanced capabilities allow it to manage the nuances and variations within these languages, providing more accurate and culturally relevant translations. 

PaLM 2's application to Google Translate is also notable given its origins as a tool for fostering communication between humans and AI. Both PaLM and PaLM 2, for instance, have been used to teach robots how to carry out tasks, turning human commands into the steps needed to complete them.

Perhaps the best part is that the update is available now on the web and via the Google Translate app on Android and iOS. 

TechRadar – All the latest technology news

Read More

ChatGPT’s free tier just got a massive upgrade – so stop paying for ChatGPT Plus

Following its GPT-4o announcement during its Spring Update event, OpenAI has finally made its new AI tools available to everyone for free, raising the question: is there any point paying for ChatGPT?

With GPT-4o, all users can now access more advanced tools, like discussing files and photos uploaded to ChatGPT. The generative AI can also conduct data analysis, create charts, and access the internet to inform its responses. 

However, with all of these features rolling out to everyone – even if they come with usage limits for non-paying users – there’s a big question of whether people should stay subscribed to OpenAI’s premium tier for ChatGPT.

It’s not like ChatGPT Plus has become entirely obsolete. Subscribers still get exclusive features, like the ability to create custom GPTs, higher usage limits with GPT-4o, and first access to new features – including early access to Voice Mode when it launches “in the coming weeks.”

But it’s understandable why subscribers feel a little burned. They’re paying $20 (around £16 / AU$30) per month for a service that’s not much different from the free one. Unless you’re an AI power user, now seems like a terrible time to sign up for ChatGPT Plus.

Thinking long term

A close up of ChatGPT on a phone, with the OpenAI logo in the background of the photo

(Image credit: Shutterstock/Daniel Chetroni)

So why would OpenAI want to make its premium service less appealing? Well, there are two prevailing theories.

The far-fetched one is that OpenAI will soon release an early version of GPT-5, or at least some exciting new features exclusive to paid members beyond the voice version of ChatGPT. It’s not out of the question, though this feels like something OpenAI would have mentioned during its Spring Update event on May 13, so color us skeptical.

The more likely reason is that OpenAI is changing tack to focus on bringing in as many users as possible, rather than paying ones, at least for now.

That’s because a recent report revealed that hardly any of us use ChatGPT and other AI tools in our day-to-day lives. If OpenAI wants people to get excited about its tools, it can’t lock the best features away behind a paywall.

What’s more, ChatGPT’s rivals – like Meta AI and Google Gemini – are free to use and offer many of the same premium tools at no cost. If it’s already a struggle to get people to use AI when it’s free, you can bet it’s significantly harder with a paywall in the way.

We’ll have to wait and see if ChatGPT Plus gets any improvements in the coming weeks, but if you’re currently subscribed (or thinking of joining) you might want to hold off for now.

Google Search is getting a massive upgrade – including letting you search with video

Google I/O 2024's entire two-hour keynote was devoted to Gemini. Not a peep was uttered about the recently launched Pixel 8a or what Android 15 will bring upon release. The only time a smartphone or Android was mentioned was in the context of how Gemini is improving them.

The tech giant is clearly going all-in on AI – so much so that the stream concluded by boldly displaying the words “Welcome to the Gemini era.” 

Among all the updates presented at the event, Google Search is slated to gain some of the more impressive changes. You could even argue that the search engine will see one of the most impactful upgrades it has ever received in its 25 years as a major tech platform. Gemini gives Google Search a huge performance boost, and we can’t help but feel excited about it.

Below is a quick rundown of all the new features Google Search will receive this year.

1. AI Overviews

Google IO 2024

(Image credit: Google)

The biggest upgrade coming to the search engine is AI Overviews, which appears to be the launch version of SGE (Search Generative Experience). It provides detailed, AI-generated answers to queries, complete with contextually relevant text, links to sources, and suggestions for follow-up questions.

Starting today, AI Overviews is leaving Google Labs and rolling out to everyone in the United States as a fully fledged feature. For anyone who used SGE, it appears to be identical. 

Response layouts are the same, and they’ll have product links too. Google has presumably worked out all the kinks so it performs optimally, although when it comes to generative AI, there’s still a chance it could hallucinate.

There are plans to expand AI Overviews to more countries with the goal of reaching over a billion people by the end of 2024. Google noted the expansion is happening “soon,” but an exact date was not given.

2. Video Search

Google IO 2024

(Image credit: Google)

AI Overviews is bringing more to Google Search than just detailed results. One of the new features allows users to upload videos to the engine alongside a text inquiry. At I/O 2024, the presenter gave the example of purchasing a record player with faulty parts. 

You can upload a clip and ask the AI what's wrong with your player, and it’ll provide a detailed answer naming the exact part that needs to be replaced, plus instructions on how to fix the problem. You might need a new tonearm or a cueing lever, but you won't need to type a question into Google to get an answer. Instead, you can speak directly into the video and send it off.

Searching With Video will launch for “Search Labs users in English in the US” soon, with plans for further expansion into additional regions over time. 

3. Smarter AI

Google IO 2024

(Image credit: Google)

Next, Google is introducing several performance boosts; however, none of them are available at the moment. They’ll roll out soon to the Search Labs program, exclusively for people in the United States and in English. 

First, you'll be able to click one of two buttons at the top to simplify an AI Overview response or ask for more details. You can also choose to return to the original answer at any time.

Second, AI Overviews will be able to understand complex questions better than before. Users won’t have to ask the search engine multiple short questions. Instead, you can enter one long inquiry – for example, a user can ask it to find a specific yoga studio with introductory packages nearby.

Lastly, Google Search can create “plans” for you – either a three-day meal plan that’s easy to prepare or an itinerary for your next vacation. It’ll provide links to recipes, plus the option to replace dishes you don't like. Further down the line, the planning tool will encompass other topics like movies, music, and hotels.

All about Gemini

That’s pretty much all of the changes coming to Google Search in a nutshell. If you’re interested in trying these out and you live in the United States, head over to the Search Labs website, sign up for the program, and give the experimental AI features a go. You’ll find them near the top of the page.

Google I/O 2024 dropped a ton of information on the tech giant’s upcoming AI endeavors. Project Astra, in particular, looked very interesting: it can identify objects and code on a monitor, and even pinpoint the city you’re in just by looking out of a window. 

Ask Photos was pretty cool too, if a little freaky. It’s an upcoming Google Photos tool capable of finding specific images in your account much faster than before and of handling “more in-depth queries” with startling accuracy.

If you want a full breakdown, check out TechRadar's list of the seven biggest AI announcements from Google I/O 2024.

One of Microsoft’s biggest Windows 11 updates yet brought a massive number of security flaw fixes

Microsoft has issued a mammoth Windows 11 update that fixes around 150 security flaws in the operating system, including 67 Remote Code Execution (RCE) vulnerabilities. RCEs enable malicious actors to deploy their code to a target device remotely, often without the victim’s consent or knowledge – so this is a Windows 11 update you definitely want to install ASAP. 

The update rolled out on Patch Tuesday (the second Tuesday of every month), Microsoft’s regular monthly release of security fixes. 

Three of the flaws were classed as ‘critical’ vulnerabilities, meaning Microsoft sees them as posing a particularly hefty risk to users. According to Bleeping Computer, more than half of the RCE vulnerabilities were found in Microsoft SQL drivers – essential software components that facilitate communication between Microsoft apps and its servers – leading to speculation that the drivers share a common flaw that is being exploited by malicious users. 

The three vulnerabilities classed as ‘critical’ had to do with Windows Defender – ironically, an app designed by Microsoft to protect users from online threats. 

Windows Defender extension for Chrome

(Image credit: Future)

A possibly record-setting update

KrebsOnSecurity, a security news site, claims that this security update sets a record for the number of flaws addressed, making it the largest update Microsoft has released this year (so far) and the largest since 2017. 

The number of bugs is broken down as follows:

  • 31 Elevation of Privilege Vulnerabilities
  • 29 Security Feature Bypass Vulnerabilities
  • 67 Remote Code Execution Vulnerabilities
  • 13 Information Disclosure Vulnerabilities
  • 7 Denial of Service Vulnerabilities
  • 3 Spoofing Vulnerabilities
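
Those category counts do, in fact, add up to the roughly 150 flaws cited above; a quick tally confirms it:

```python
# Tally of the vulnerability categories fixed in this Patch Tuesday update
counts = {
    "Elevation of Privilege": 31,
    "Security Feature Bypass": 29,
    "Remote Code Execution": 67,
    "Information Disclosure": 13,
    "Denial of Service": 7,
    "Spoofing": 3,
}

total = sum(counts.values())
print(total)  # → 150
```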

The flaws spanned several apps and features, including the Microsoft Office apps, BitLocker, Windows Defender, Azure, and more. 

Two zero-day loopholes that were cause for concern

Microsoft’s April Patch Tuesday update also addressed two zero-day vulnerabilities, which have apparently been exploited in malware attacks. Zero-day vulnerabilities are software flaws that malicious actors find, and can exploit, before the software’s developers discover them – the ‘zero’ refers to the amount of time developers have had to patch the issue before it is exploited. 

Microsoft itself hasn’t said whether the zero-day flaws were being actively exploited; that information was shared by Sophos (a software and hardware company) and Trend Micro (a cybersecurity firm). 

One of these has been labeled CVE-2024-26234 by Microsoft and is classed as a Proxy Driver Spoofing Vulnerability. The other, CVE-2024-29988, is classed as a SmartScreen Prompt Security Feature Bypass Vulnerability.

You can see the full list of vulnerabilities in Bleeping Computer’s report. Mashable points out that Windows needs such a vast number of patches partly because it runs on machines from many different manufacturers and has to keep accommodating a huge variety of hardware configurations.

Some users might find Windows 11’s frequent updates annoying enough to consider alternative operating systems like macOS. If you’re sticking with Windows 11, KrebsOnSecurity recommends backing up your computer’s data before installing the update. I’m glad Microsoft continues to address bugs and security risks in Windows 11, even if that means we’re nagged to update the OS more often than with some of its competitors – and I’d urge users to install this update through Windows Update if their PC hasn’t started the process already. 

GPT-4 is bringing a massive upgrade to ChatGPT

OpenAI has officially announced GPT-4 – the latest version of its incredibly popular large language model powering artificial intelligence (AI) chatbots (among other cool things).

If you’ve heard the hype about ChatGPT (perhaps at an incredibly trendy party or a work meeting), then you may have a passing familiarity with GPT-3 (and GPT-3.5, a more recent improved version). GPT stands for Generative Pre-trained Transformer, a machine learning architecture that uses neural networks to bounce around raw tidbits of input information like ping pong balls and turn them into something comprehensible and convincing to human beings. OpenAI claims that GPT-4 is its “most advanced AI system” and that it has been “trained using human feedback, to produce even safer, more useful output in natural language and code.”

GPT-3 and GPT-3.5 are large language models (LLM), a type of machine learning model, from the AI research lab OpenAI and they are the technology that ChatGPT is built on. If you've been following recent developments in the AI chatbot arena, you probably haven’t missed the excitement about this technology and the explosive popularity of ChatGPT. Now, the successor to this technology, and possibly to ChatGPT itself, has been released.

Cut to the chase

  • What is it? GPT-4 is the latest version of the large language model that’s used in popular AI chatbots
  • When is it out? It was officially announced March 14, 2023
  • How much is it? It’s free to try out, and there are subscription tiers as well

When will ChatGPT-4 be released?

GPT-4 was officially revealed on March 14, although it didn’t come as too much of a surprise: Microsoft Germany CTO Andreas Braun, speaking at the AI in Focus – Digital Kickoff event, had let slip that the release of GPT-4 was imminent. 

It had previously been speculated that GPT-4 would be multimodal, which Braun also confirmed. GPT-3 is already one of the most impressive natural language processing (NLP) models – models built with the aim of producing human-like speech – in history. 

GPT-4 may be the most ambitious NLP model we have seen yet, and is widely expected to be among the largest language models in existence.

A man in a suit using a laptop with a projected display showing a mockup of the ChatGPT interface.

ChatGPT is about to get stronger. (Image credit: Shutterstock)

What is the difference between GPT-3 and GPT-4?

The type of input ChatGPT (GPT-3 and GPT-3.5) processes is plain text, and the output it produces is natural language text and code. GPT-4’s multimodality means that you may be able to enter different kinds of input – like video, sound (e.g. speech), images, and text. As on the input end, these multimodal faculties may also allow for the generation of output like video, audio, and other types of content. Inputting and outputting both text and visual content could provide a huge boost in the power and capability of AI chatbots relying on GPT-4.

Furthermore, just as GPT-3.5 improved on GPT-3 by being more fine-tuned for natural chat, code processing and output, and traditional completion tasks, GPT-4 should improve on GPT-3.5’s understanding. One of GPT-3 and GPT-3.5’s main strengths is that they are trained on an immense amount of text data sourced from across the internet. 

Bing search and ChatGPT

(Image credit: Rokas Tenys via Shutterstock)

What can GPT-4 do?

GPT-4 is trained on a diverse spectrum of multimodal information. This means that it should, in theory, be able to understand and produce language that is more likely to be accurate and relevant to what is being asked of it. This marks another improvement in the GPT series’ ability to understand and interpret not just input data, but also the context in which it appears. Additionally, GPT-4 will have an increased capacity to perform multiple tasks at once.

OpenAI also claims that GPT-4 is 40% more likely to provide factual responses, which is encouraging given that companies like Microsoft plan to use GPT-4 in search engines and other tools we rely on for factual information. OpenAI has also said it is 82% less likely to respond to requests for ‘disallowed’ content.

Safety is a big focus with GPT-4: OpenAI spent over six months working to make it safer, through an improved monitoring framework and by collaborating with experts in sensitive fields such as medicine and geopolitics to ensure the replies it gives are accurate and safe.

These new features promise the ability to handle a wider variety of tasks, more efficient use of processing resources, the capacity to complete multiple tasks simultaneously, and the potential for greater accuracy – a key concern among today’s AI-bot and search engine engineers.

How GPT-4 will be presented is yet to be confirmed, as there is still a great deal OpenAI stands to reveal. We do know, however, that Microsoft has exclusive rights to OpenAI’s GPT-3 language model technology and has already begun rolling ChatGPT out across Bing, leading many in the industry to predict that GPT-4 will also end up embedded in Microsoft products (including Bing). 

We have already seen the extended and persistent waves caused by GPT-3/GPT-3.5 and ChatGPT in many areas of our lives, including but not limited to tech such as content creation, education, and commercial productivity and activity. When you add more dimensions to the type of input that can be both submitted and generated, it's hard to predict the scale of the next upheaval. 

The ethical discussions around AI-generated content have multiplied as quickly as the technology’s ability to generate content, and this development is no exception.

GPT-4 is far from perfect, as OpenAI admits. It still has limitations surrounding social biases – the company warns it could reflect harmful stereotypes, and it still has what the company calls 'hallucinations', where the model creates made-up information that is “incorrect but sounds plausible.”

Even so, it's an exciting milestone for GPT in particular and AI in general, and the pace at which GPT has evolved since ChatGPT's launch last year is incredibly impressive.

Massive Google Workspace update dials up the fight for hybrid working supremacy

Google has lifted the lid on a series of updates for its Workspace suite of productivity and collaboration software designed to cater to the needs of the hybrid working era.

Some of the upgrades are small, like the ability to react with emojis during video meetings, but others could have a major impact on the way in which workers collaborate on shared documents, presentations and spreadsheets.

Most significantly, Google says it will integrate Meet directly into Docs, Sheets and Slides in the coming weeks, which will allow Google Workspace users to quickly spin up a meeting when collaborating on a project. Unlike traditional screen-sharing, video feeds will be housed within a dedicated sidebar, positioned alongside the content the team is working on.

Google Workspace for hybrid working

Since the birth of its productivity suite (originally Google Apps, later G Suite) in 2006, Google has competed directly with Microsoft in the office software space, going up against the famous Microsoft 365 suite, which houses the likes of Word, PowerPoint, and Excel.

One of the defining features of Microsoft’s offering is tight integration between apps and services, extending all the way out to the Windows operating system on which most business computers run. And although Google stole the march on Microsoft when it came to the cloud-based model, individual G Suite apps have historically felt much more isolated.

When Google rebranded its productivity suite as Workspace in 2020, however, the company announced it would make a concerted effort to create a more “deeply integrated user experience”, by improving the level of interoperability between its various productivity apps.

Google Docs

Google Meet will soon be integrated directly into Docs, Slides and Sheets. (Image credit: Google)

The latest round of Google Workspace updates takes strides towards achieving this goal, capitalizing on the full breadth of the suite to create functionality that should help workers improve their productivity in a hybrid working setting.

In addition to new synergies with Workspace office software, Google Meet will also receive a new picture-in-picture mode next month, which will allow Chrome users to bring up a floating meeting window that sits on top of other browser tabs.

And from a security perspective, Google is set to launch client-side encryption for Meet calls in May, with optional end-to-end encryption to follow by the end of the year, bringing the service on par with Teams and Zoom.

To support asynchronous collaboration, meanwhile, Google is preparing a number of updates for its Spaces messaging platform. Most notably, the company is improving the search functionality to help users surface the most relevant conversations and rolling out Slack-like inline message threading, which is apparently a highly requested upgrade.

Google Workspace

(Image credit: Google)

“One of the hopeful signs of a return to normalcy is seeing many of our customers make plans to come back into their offices. And they’re asking for strategies that will make hybrid work a more equitable and productive experience for everyone. We’re also beginning our own transition to hybrid work in early April,” said Google.

“As we gear up for that, it feels like a time of optimism for new ways of working together and the potential for hybrid models to become the sustainable norm. When designed well, a hybrid model gives employees the flexibility to deliver their best from anywhere, while bringing them together thoughtfully for the power of in-person collaboration.”
