Google Photos can now make automatic highlight videos of your life – here’s how

Google Photos is already capable of some increasingly impressive photo and video tricks – and now it's learned to create automatic highlight videos of the friends, family, and places you've chosen from your library.

The Google Photos app on Android and iOS already offers video creation tools, but this new update (rolling out from October 25) will let you search the people, places, or activities in your library that you'd like to star in an AI-created video. The app will then automatically rustle up a one-minute highlights video of all your chosen subjects.

This video will include a combination of video clips and photos, but Google Photos will also add music and sync the footage to those tunes. These kinds of auto-created highlight videos, which we've seen in the Google Photos Memories feature and elsewhere from the likes of GoPro, can be a little hit-and-miss in their execution, but we're looking forward to giving Google's new AI director a spin.

Fortunately, if you don't like some of Google Photos' choices, you can also trim or rearrange the clips, and pick some different music. You can see all of this in action in the example video below.

The Google Photos app showing an auto-created family video

(Image credit: Google)

So how will you be able to test-drive this new feature, once it rolls out on Android and iOS from October 25?

At the top of the app, hit the 'plus' icon and you'll see a new menu that includes options to create new Albums, Collages, Cinematic photos, Animations and, yes, Highlight videos.

Three phones on an orange background showing Google Photos video creation tools

(Image credit: Google)

Tap 'Highlight videos' and you'll see a search bar where you can search for your video stars, be that people, places, or even the years that events have taken place. From Google's demo, it looks like the default video length is one minute, but it's here that you can make further tweaks before hitting 'save'.

We've asked Google if this feature is coming to the web version of Google Photos and also Chromebooks, and will update this article when we hear back.

Tip of the AI iceberg

Google's main aim with photos and videos is to automate the kinds of edits that non-professionals have little time or appetite for – so this AI-powered video creator tool isn't a huge surprise.

We recently saw a related tool appear in Google Photos' Memories feature, which now lets you “co-author” Memories albums with friends and family. Collaborators can add their own photos and videos to your Memories, which can then be shared as a standalone video.

So whether you're looking to edit together your own highlights reels or, thanks to this new tool, let Google's algorithms do it for you, Google Photos is increasingly turning into the fuss-free place to do it.

The Google Pixel 8 Pro also recently debuted some impressive cloud-based video features, including Video Boost and Night Sight Video. The only slight shame is that these features require an internet connection rather than working on-device, though AI tools like Magic Eraser and Call Screen do at least work locally on your phone.

Windows 11’s final major update before Windows 12 could drop soon – and here’s what it will look like

Keen-eyed observers have spotted ISOs of the next version of Windows 11, Windows 11 23H2, on Microsoft’s servers. This suggests that the company is preparing the update for public rollout very soon. 

ISO files (sometimes called ISO images) are digital copies of an entire disc – a CD, DVD, or Blu-ray – packaged into a single file. In this case, ISO images of Windows 11 23H2 have been spotted on Microsoft’s servers.
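
If you do end up downloading an ISO yourself, it’s worth checking that the file arrived intact before installing anything from it. Here’s a minimal Python sketch that computes a file’s SHA-256 hash, which you could compare against the checksum Microsoft lists alongside the download where one is provided (the file name below is just a placeholder, not the actual ISO name):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a large file (such as a multi-GB ISO) in chunks to keep memory use low."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder name – substitute whatever your downloaded ISO is actually called.
    iso = Path("Win11_23H2_English_x64.iso")
    print(f"{iso.name}: {sha256_of(iso)}")
```

If the hash you compute doesn’t match the published one, the download was corrupted (or tampered with) and shouldn’t be used.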

It’s also expected that Windows 11 23H2 will have all the new features from the recent “Moment 4” update to Windows 11 22H2, and introduce some new changes like an enhanced notification center, a System Components page, and Microsoft's shiny new AI assistant, Copilot. While the Windows 11 23H2 update isn’t the most groundbreaking in Windows 11’s history, it’s still worth installing to get the new features and ensure your PC gets support from Microsoft.

With rumors that Windows 12 could arrive sooner rather than later, this may be the last major update Windows 11 receives. The previous one, version 22H2, was released in September 2022 and has seen regular updates since. The Home, Pro, Pro Education, and Pro for Workstations editions of Windows 11 22H2 will be supported by Microsoft until October 8, 2024, according to its lifecycle policy, while the Enterprise and Education editions will be supported a little longer, until October 14, 2025.

Young woman using a laptop inside at night

(Image credit: Getty Images)

How to get the new Windows 11 update

Microsoft will continue to put out security updates, bug fixes, and technical support for the above versions of Windows 11 22H2 until those dates, but you still shouldn’t wait too long to move to Windows 11 23H2, as upgrading will ensure you get the latest features and fixes. If you want to make the change as soon as possible, we expect to see it offered as an optional update in Windows Update very soon – and we’ll let you know when it’s available to download.

Windows Latest, which reported on the existence of the ISOs, concludes that update 23H2 will be the last major update for Windows 11, with Microsoft expected to announce the next generation of Windows (which many people are calling “Windows 12”, despite Microsoft being understandably tight-lipped about any potential successor to Windows 11). Windows Latest also notes that it’s been known for some time that Windows 11 23H2 was expected at some point in October or November of this year, and that timing seems spot on now that ISOs of the update appeared on Microsoft’s servers over the weekend, suggesting the launch is imminent.

Apparently, there are two versions of the update ISOs, English (United States) and Chinese, and we can reasonably conclude that the update is done and dusted (at least for these languages) and being prepared for release to users. All that’s left is to watch for Microsoft’s official communications about the update.

Chrome just got 5 updates to speed up your web browsing – here’s how to use them

Google just announced five new updates to its predictive search, with some arriving this week. You can already experiment with the improved search bar on Google Chrome and ChromeOS devices.

The search giant announced the update in a blog post on Wednesday, promising the improvements will make browsing with Chrome’s address bar “even faster”.

Here are the highlights:

Smarter Autocompletion

Whenever you have a question, you want to find the answers fast. With an updated address bar, the search engine will be better able to predict what you’re looking for, even if you don't get the beginning of the URL right. For example, when typing 'flights', Chrome’s omnibox on desktop will suggest taking you to Google Flights. It may also take personal preferences, such as a preferred airline, into consideration. There's no word on when this change is coming to mobile.

Dynamic results

The search bar in Chrome now boasts increased responsiveness, allowing users to receive faster and more visible results as soon as they begin typing the first letter of their query. This, combined with a new layout, should mean faster and more readable access to the information you need. This update is desktop-only for now.

Chrome update autocorrect address bar

Chrome’s update can autocorrect URLs in the address bar (Image credit: Google)

Typo Corrections

I can’t tell you how many times I’ve been typing quickly and misspelled a URL, swapping vowels or making some other slip. Chrome will now detect these typos and immediately suggest similar sites based on the websites you’ve previously visited.
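
Google hasn’t detailed exactly how this matching works, but the underlying idea – comparing what you typed against sites you’ve visited before and surfacing the closest matches – can be sketched in a few lines of Python with the standard library’s fuzzy matcher (the history list here is purely illustrative, not Chrome’s actual data or algorithm):

```python
import difflib

# Illustrative browsing history – in the real feature this comes from Chrome's own records.
history = ["techradar.com", "theverge.com", "google.com/flights", "github.com"]

def suggest(typed: str, n: int = 3) -> list[str]:
    """Return the previously visited sites that most closely resemble what was typed."""
    return difflib.get_close_matches(typed, history, n=n, cutoff=0.6)

print(suggest("techrdaar.com"))  # ['techradar.com']
```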

Bookmarks

For users who rely heavily on bookmarks to keep track of their favorite web pages, this update is a game-changer. Chrome now lets you search within your bookmark folders, making it more convenient to find those tucked-away pages. Whether you have an extensive collection of bookmarks or simply want to access a specific page more efficiently, this feature will help you stay organized and find what you need with ease.

Just remember that to search bookmarks through the address bar, you need to include the bookmark folder name.

Ever found yourself in need of an answer but unsure where to look? Google has addressed this dilemma with its latest update. Even if you haven't previously visited certain websites, the search engine will now suggest popular sites related to your query. This feature ensures that you're never left in the dark and can quickly discover sources of information through natural-language queries.

In all, these appear to be some useful quality-of-life updates to the address bar we all use so often. Now it's our turn to see how well they work.

11 new AI projects announced at Adobe MAX 2023 – here’s why they could change everything

Adobe is currently holding its MAX 2023 event, showing off what it has in store for the next year or so. One of the focal points of the conference was a series of 11 “Projects” that have the potential to become “important elements” of Adobe products in the future.

Recently, the company provided a sneak peek at one of these projects, Project Stardust, which can separate objects in a photograph into individual layers for easy editing. Users can move objects around or delete them, and from there have a generative AI create something to take their place. The other 10 work similarly, harnessing AI to power their editing and creative capabilities. The group is split into three main categories.

Photos

Alongside Stardust in the Photos category, you have Project See Through, a tool that removes reflections in a photograph. Adobe states that glass reflections can be really annoying since they can obscure subjects. Instead of having to go through a multi-step editing process in Photoshop, See Through does it all for you quickly.

Project See Through, before and after removing a reflection (Image credit: Adobe)

Video & Audio

Similar to how Stardust can remove objects in images, Project Fast Fill can remove them in videos thanks to the company’s Generative Fill tech. It can also add or change content via “Firefly-powered text prompts.” In the example shown to us, Fast Fill can add a tie to a man whose suit doesn't have one, or alter the latte art in a cup of coffee from a heart to a flower.

Next, Project Res Up can bump up the resolution of a clip via diffusion-based upsampling technology. Third is Project Scene Change, which can swap out the background of a video from, say, an office building to a jungle. For audio, there’s Project Dub Dub Dub, a tool that Adobe claims can translate speech from one language to another “while preserving the voice of the original speaker”.

3D & Design

For the last category, these five are all about helping users create – even if they’re not the best artists.

Project Draw & Delight can turn your doodle into a polished drawing, using a text prompt to guide it. Project Glyph Ease “makes customized lettering more accessible” by instantly applying specific design elements to a word in Illustrator. All you have to do is provide a rough outline of what you want the AI to add.

Project Draw & Delight, before and after (Image credit: Adobe)

The trio of 3D imaging tools is more situational, but impressive nonetheless.

Project Poseable’s AI can morph a 3D model to match “poses from photos of real people.” So if you upload a picture of someone striking a karate pose, the model will do the same. Project Primrose lets artists quickly alter the texture of a rendered piece of clothing. And finally, we have Project Neo, which helps creators build 3D objects using “2D tools and methods”.

To reiterate what we said earlier, these projects are prototypes at the time of this writing. There’s no guarantee any of these will become a new feature in Photoshop or any other Adobe product. However, there are some we believe have the potential for an eventual release. 

Stardust, Res Up, and Draw & Delight appear to be the most “complete”, with fewer visible flaws than some of the others. Certain projects need more time in the oven, in our opinion – the voice from Dub Dub Dub, for example, sounds stilted, robotic, and unnatural.

Be sure to check out TechRadar’s list of the best AI art generators of the year if you’re looking for ways to bolster content generation. 

ChatGPT can finally get up-to-date answers from the internet – here’s how

OpenAI isn't slowing down with the development of ChatGPT: only a few days after the AI chatbot added support for picture prompts and voice conversations, ChatGPT is now able to once again search the web and return answers that are right up to date.

“ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources,” says the OpenAI post on X. “It is no longer limited to data before September 2021.”

What that post doesn't mention is that this is the same functionality that ChatGPT briefly had earlier this year, before it was pulled in July – users were deploying the feature to get around paywalls and access paid-for content for free. As far as we can tell from our testing, OpenAI has now plugged that particular gap.

As before, ChatGPT uses Bing to search the web – no surprise considering how closely Microsoft and OpenAI have been working together in recent years. OpenAI suggests the feature is best used for “tasks that require up-to-date information”, like planning a vacation or doing some technical research.

How it works

ChatGPT and Bing

ChatGPT knows the iPhone 15 is out, but not who’s playing tonight (Image credit: Future)

Right now, you need to be a ChatGPT Plus subscriber to give the bot access to the web, which will set you back $20 / £16 a month. Enterprise users also get the feature right away, with access for everyone else coming “soon”, OpenAI says.

Once you've logged into ChatGPT on the web, you need to select the GPT-4 engine, and then pick 'Browse with Bing' from the menu that pops up underneath. You can then start your conversation, and the bot will use information from the web in addition to the data it has access to from its regular training.

We successfully got ChatGPT to tell us when the iPhone 15 launched, and it even linked back to a reputable tech website with more information (though it wasn't TechRadar, sadly). That's one example of how the AI assistant now knows everything that's going on – as long as it's on the web.

The feature is still labeled as being in beta, so expect more refinements and improvements in the future. It's also only as good as Bing's search results, so for anything that the search engine isn't sure about – like tonight's soccer matches in the UK – you might get directed to a relevant website instead of seeing the answers.

Microsoft to lay out its ambitious vision for AI integration in Windows 11 – here’s what to expect

Microsoft is moving full steam ahead with its AI efforts, and could soon present its major plans in a special event. It’s anticipated that the company may take to the stage to announce continuing AI integration into Microsoft products, such as Windows 11, Microsoft 365 services, Surface, and others. 

It’s expected that we could even see this announcement as soon as today, Thursday, September 21, as Microsoft is due to host an event in New York City that is widely expected to focus on new Surface devices. However, The Verge points out that this closely follows the announced resignation of former Windows and Surface chief Panos Panay.

The Verge has also obtained a memo in which Microsoft’s head of consumer marketing, Yusuf Mehdi, praises Panay highly. He then references today’s “special event”, which will expand on Microsoft’s existing partnership with OpenAI, saying it will be “only the beginning” of Microsoft’s AI-powered vision.

Mehdi directly references Microsoft’s recent integration of OpenAI’s tech into Edge and Bing, as well as Microsoft 365 and Microsoft’s new AI assistant, Windows Copilot. These improved tools will come installed on all new Windows 11 PCs, including Microsoft’s own Surface lineup and those of OEM (Original Equipment Manufacturer) partners – companies such as Dell that make devices shipping with Windows pre-installed. The memo wraps up with Mehdi saying the event “will lay out the vision for what’s ahead.”

All eyes on the Microsoft event

Microsoft

(Image credit: Unsplash)

Microsoft's internal reshuffling

Mehdi went on to indicate that there will be innovations for Microsoft’s Surface, silicon, and devices, headed by Pavan Davuluri. Interestingly, there have been reports that Microsoft is actively working on its own AI chips that could challenge Nvidia’s offerings.

We could see more information about this at today’s event. The Verge speculates that Microsoft might be trying to convince OEMs to use neural processing unit (NPU) chips – which can efficiently handle AI tasks and future Windows versions (such as Windows 12) – in their new devices. One model expected to debut at the event is the Surface Laptop Studio 2, which could carry one of these new NPU components.

Mehdi has also spoken about how Microsoft is reorganizing its leadership and workforce to increase its focus on AI and Microsoft Copilot. Microsoft seemingly intends him to be the public face of Windows now that Panay has left, although he doesn’t directly head up any of the core teams that develop and deliver Windows. Responsibility for Windows development and device hardware is instead spread among three key people: Yusuf Mehdi, Pavan Davuluri, and Mikhail Parakhin, who heads up the team combining Windows, Web, and Services.

The last of these, Parakhin, is currently Microsoft’s CEO of advertising and web services, and is considered the main engineering leader when it comes to Windows. He’s a less visible public figure, not even putting a profile picture on X (formerly Twitter). Mehdi will be the one to watch for updates about the larger Windows picture, whereas Davuluri and Parakhin will be tasked with making the AI vision for Windows and other devices a reality.

Man tapping a cloud icon

(Image credit: Shutterstock)

What Microsoft products might look like in the future

We’ve already seen the first steps of integrating the newly improved Bing search and the recently added Bing AI right into Windows menus and the taskbar. It’s expected that this integration of web technologies, AI, and services into Windows will continue. The Verge suggests that Windows is being pivoted to live fully on the web, and this was seemingly backed up in the FTC v. Microsoft hearing, which featured an internal Microsoft presentation laying out plans to move the consumer version of Windows fully into the cloud.

Whatever the case, I think we can expect to see something major, perhaps even a bold new direction, as Microsoft says goodbye to Panay. Mehdi wraps up his memo with a call to action for his team and other Microsoft staff, and it’s clear that the company very much has its head in the game despite the high-profile departure. Today’s event will certainly be an interesting one for anyone invested in the future of Windows and Windows devices.

Adobe’s free version of Firefly finally exits beta – here’s how to access it

Adobe has announced it is expanding the general availability of its Firefly generative AI tool on the company’s free Express platform. 

More specifically, the Text to Image and Text Effects tools are finally exiting their months-long beta. The former, as the name suggests, allows users to create unique images just by entering a text prompt such as 'horses galloping' or 'a monster in a forest'. The latter lets people create floating text bubbles with fonts sporting special effects. These two are mainly used to create compelling content for a variety of use cases, from enhancing plain-looking resumes to producing marketing material. Apparently, the tools were a huge hit with users during the beta.

Firefly’s text features are available in over 100 languages, including Spanish, French, Japanese, and, of course, English. What’s interesting is that Adobe tells us the AI is “safe for commercial use.” Presumably, this means the model won’t generate anything inappropriate or totally random – what it does generate will fit the prompt you entered.

How to use Firefly

Using the generative AI tools is very easy and honestly takes no time at all. First, head over to the Adobe Express website and create an account if you haven’t done so already. Scroll down a little on the front page, and you’ll see the creation tools primed and ready to go.

Adobe Express website

(Image credit: Future)

Enter whatever text prompt you have in mind, give Adobe Express a few seconds to generate the content, and you’re set. You can then edit the image further if you’d like via the kit on the left-hand side.

Adobe Firefly

(Image credit: Future)

Future updates

The rest of the Firefly update is mainly geared towards an entrepreneurial audience. Subscribers to either Adobe Creative Cloud or Express Premium will begin to receive Generative Credits that can be used to have Firefly create content. Additionally, the AI is being integrated into an Adobe asset library for businesses. There aren’t any new features for everyday, casual users – at least not right now. 

Adobe states it has plans to expand its Express platform within the coming months. Most notably, it wants to bring the “latest version” to mobile devices, so we might see the Firefly AI on smartphones by the end of the year. We’ve reached out to Adobe for clarification and will update this story when we hear back.

While we have you, be sure to check out TechRadar’s list of the best AI art generators for 2023. Any one of these is a good alternative to Firefly.

Meta Quest 3 gets new launch teaser – here’s what to expect at Meta Connect

The Meta Quest 3 has appeared in a new teaser that confirms it'll be announced at the Meta Connect event alongside news on “AI, virtual, mixed and augmented realities”.

This year's Meta Connect will take place on September 27 and will be a two-day virtual event, with Mark Zuckerberg's keynote taking place on that first day. This keynote has previously seen hardware announcements such as the Meta Quest Pro, alongside upgrades to the virtual social network Horizon Worlds.

So what are we expecting this year? Judging by the launch teaser below, the Meta Quest 3 – which was announced on June 1 in the days before Apple's WWDC 2023 – will be the star of the show. And while we already know a lot about the headset, Zuckerberg said there would be more announcements at Meta Connect 2023.

The teaser doesn't reveal a great deal that we didn't already see in the original announcement in June, but it again focuses on the three cameras/sensors on the front of the Quest 3 headset. These are likely to consist of two color passthrough cameras plus an IR (infrared) projector to help map your surroundings.

The main benefit of this full-color passthrough will be that the Quest 3 will be able to show the real world in color, rather than using the black-and-white passthrough seen on the Quest 2. 

The video also shows the headset's redesigned Touch Plus controllers, which are apparently more comfortable to hold than the Quest 2's and will have improved haptic feedback. This could, for example, adjust the level of feedback you feel when doing virtual boxing.

The Meta Quest 3 VR headset and controller

(Image credit: Meta)

But Meta Connect will also need to give us a glimpse of the software and mixed-reality experiences that will be possible with the Quest 3. And while the teaser doesn't give us a peek at any of those, the official Meta Connect page is promising a broader look at VR, AR and mixed realities.

The page also mentions AI, which could be referring to the rumored announcement of some new AI chatbots with different personalities, which are also expected in September.

Meta Connect: what to expect

Despite Meta making increasingly loud noises about its moves into AI – including developing its own AI chip and a speech-generating AI tool that's apparently too dangerous to release – we're still expecting the Quest 3 to be the main focus of Connect, and this teaser seemingly confirms that.

The main things we already know about the Quest 3 are that it'll offer full-color passthrough, have twice the graphical performance of the Quest 2, and be 40% thinner than its predecessor. That said, some leaks have suggested that it may also be marginally heavier than the Quest 2.

We also know that the Meta Quest 3 will cost $499 / £499 / AU$829 when it becomes available for pre-order, most likely immediately after Meta Connect on September 27. But the main thing that Meta needs to nail at Connect is the new software experiences that'll convince existing Quest 2 owners to upgrade.

The current state of the Quest Store suggests that few games and experiences are managing to break through to become widely acclaimed hits. Still, new games for Meta's social VR app Horizon Worlds, like Super Rumble, suggest that Meta is retooling the platform to help it offer improved graphics and more sophisticated games.

With a smartphone version of Horizon Worlds also apparently en route for those who don't have a VR headset, plus a rumored new Smart Guardian feature to make it easier for Quest 3 owners to map their room, we can expect improvements across the board.

But exactly how much the $21 billion hit Meta's Reality Labs has seemingly taken in the last 18 months affects the Quest 3 is something we'll have to wait until September 27 to find out.

ChatGPT lands on Android in the United States – here’s how to use it

After a short pre-registration period, ChatGPT on Android is going live in select countries as developer OpenAI finally ends the head start given to iOS users. 

If you live in the US, India, Bangladesh, or Brazil, you can now install the app from the Google Play Store onto your phone. Everyone else will have to wait a bit: the official OpenAI Twitter account states that its Android service will roll out to other global regions within the coming week.

The first thing you may notice upon downloading the app is that it functions pretty much like ChatGPT on desktop or iPhone. It’s the same generative AI service: you can ask it whatever question you may have, or ask it for advice. There are two ways to interact with ChatGPT – typing in a text prompt or speaking a voice prompt through the in-app speech recognition feature.

You can create a new login for the mobile AI, or you can sign in with a previously created account if you wish. All of your past prompts and ChatGPT history will be there on the Android version, so don’t worry about missing a beat.

ChatGPT on Android

(Image credit: OpenAI)

Features

The Settings menu does contain a couple of notable features that we should mention. Under Data Controls, users can choose whether to share their chat history with the company to train its AI or deny the developer permission. There’s also a way to export your data into a separate file so you can then upload the information onto another account. Finally, it’s possible to wipe your chat history as well as delete your account.

It appears there are plans to one day introduce ChatGPT Plus to Android. This is a subscription service offering perks such as priority access during times of high demand and access to newer features like the more advanced GPT-4 model. It’s unknown when ChatGPT Plus will arrive on Android. We’ve reached out to OpenAI for more info and will update this story when we hear back.

ChatGPT on Android

(Image credit: OpenAI)

There isn’t much in the way of restrictions for ChatGPT on Android. At the very least, your device does need to be running Android 6, which came out in 2015. So as long as you own a phone made within the last decade or so, you can try out the app.

Major milestone

This launch is a very important milestone for the company, as Android is the world’s most popular operating system. As of June 2023, Android makes up a little over 40 percent of the total OS market share, followed by Windows at 28 percent and iOS at nearly 17 percent. It is nothing short of a behemoth in the industry.

With the release, we can’t help but wonder how this will affect people’s lives. OpenAI is potentially introducing a transformative (yet controversial) piece of tech to people who’ve never used it before. 

On one hand, the chatbot can help vast numbers of users learn new topics or get advice based on information pulled from experts. It’s a more conversational and relaxed experience compared to figuring out how to get the response you want from a search engine. However, you do run the risk of people becoming misinformed about a topic due to an AI hallucination. Outputting false information remains the elephant in the room for much of the generative AI industry, and while the major players are making an effort to solve this problem, it’s unknown when hallucinations will finally become a thing of the past.

To get an idea of the ways AI can help us, check out TechRadar’s list of the best AI tools for 2023.

Google Messages update could be a game changer for messaging apps – here’s why

We may soon live in a world where large messaging platforms will be able to seamlessly communicate with each other. Google is taking the first step into this new world, announcing this week that it will support the Messaging Layer Security (MLS) standard, with plans to incorporate the protocol into its Messages app.

As Google points out in its Security Blog announcement, one of the annoyances concerning messaging apps is the lack of interoperability. Each platform has its own opinion of what counts as robust end-to-end encryption for texts. Developers don’t want to lower their “security standards to cater for the lowest common denominator and raise implementation costs”. If they did, the result would be, as Google puts it, “a spaghetti of ad hoc middleware” potentially endangering user information. MLS, however, aims to be a universal standard for everyone. It could be the solution these tech companies need.

Better interoperability

Google claims MLS “enables practical interoperability across services and platforms”. It goes on to say the protocol is “flexible enough… to address emerging threats to… [user] security”. Imagine being able to contact someone on WhatsApp and then shooting a text over to a friend on Telegram right from your messaging app of choice. You won’t need five different apps on your smartphone to stay in contact with people and you won't have to worry about a lack of security.
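
MLS itself (standardized by the IETF as RFC 9420) is a full group-keying protocol and far too involved to reproduce here, but the building block this kind of end-to-end encryption rests on – two parties independently deriving the same secret key without ever transmitting it – can be sketched with Python’s cryptography package. This is a simplified one-to-one illustration under our own assumptions, not Google’s implementation or the MLS protocol itself:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair and shares only the public half.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides independently arrive at the same shared secret...
alice_secret = alice_priv.exchange(bob_priv.public_key())
bob_secret = bob_priv.exchange(alice_priv.public_key())
assert alice_secret == bob_secret

# ...and stretch it into a symmetric message key.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo chat").derive(alice_secret)

# One side encrypts; the other, holding the same derived key, decrypts.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)
print(AESGCM(key).decrypt(nonce, ciphertext, None))  # b'meet at noon'
```

MLS extends this basic idea to large groups by arranging keys in a tree, so members can be added or removed without re-encrypting everything for everyone – which is what makes it practical for big, cross-platform services.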

As stated earlier, Google Messages will one day support the new encryption protocol. In addition to the update, the company will open-source its MLS implementation into the “Android codebase.” This could result in developers having an easier time incorporating MLS into their software – if they choose to adopt it, of course. Right now, Google is the only major brand we’re aware of that has announced its support. Mozilla has posted a sort of rallying cry on its blog calling MLS an “internet standard”, but it doesn’t appear the Firefox developer plans to add it to its browser.

Cost of doing business

There is one line in the post that we found particularly interesting. Google says it is “strongly supportive of regulatory efforts [requiring] interoperability for large end-to-end messaging platforms.” As 9to5Google points out in its report, this could be a reference to the Digital Markets Act, a law passed by the European Union last year demanding that tech corporations increase the “level of interoperability between messaging services”, among other things. If they don’t comply, violators “could be fined up to 20 percent” of global revenue for repeated offenses.

Google is willing to play by the new rules. It’s even willing to help other Android devs by open-sourcing its future MLS code. But what about Apple? Will iMessage support the protocol?

Honestly, who knows? We doubt Apple will ever want to play nice with others. It has repeatedly rebuffed Google’s advances to put RCS (Rich Communication Services) on iOS, and it’s even willing to “pull iMessage from UK iPhones rather than weaken its security”. Sure, the threat of a massive EU fine could change Apple's mind, or it might simply accept the penalty as a cost of doing business in Europe.
