Signed, sealed, delivered, summarized: new Gemini-powered AI feature for Gmail looks like it’s close to launch

A summarize feature powered by Gemini, Google’s recently debuted generative AI model and digital assistant, is coming to the Gmail app for Android – and it could make reading and understanding emails much faster and easier. The feature is expected to roll out soon to all users, and they’ll be able to provide feedback by rating the quality of the generated email summaries. 

The feature has been rumored to be in the works for some time, as documented by Android Authority, and now it’s apparently close to launch.

One of Android Authority’s code sleuth sources managed to get the feature working on Gmail app version 2024.04.21.626860299 after a fair amount of tinkering. The steps they took haven’t been disclosed, so if you want to replicate this you’ll have to do some experimenting, but the fact that the summarize feature can already be up and running suggests that Android Gmail users may not have to wait very long.

Android Authority’s report features a screenshot of a chat window in which the user asks Gemini to summarize the email they currently have open, and Gemini obliges. Apparently, the feature will be available via a ‘Summarize this email’ button under an email’s subject line – presumably triggering that same prompt – and should return a compact summary of the email. This could prove especially helpful when dealing with a large number of emails, or with particularly long emails packed with details.

Once the summary is provided, users will be shown thumbs up and thumbs down buttons under Gemini’s output, similar to OpenAI’s ChatGPT after it gives its reply to a user’s query. This will give Google a better understanding of how helpful the feature is to users and how it could be improved. There will also be a button that allows you to copy the email summary to your clipboard, according to the screenshot. 

A man working at an office and looking at his screen while using Gmail

(Image credit: Shutterstock/fizkes)

When to expect the new feature

The speculation is that the feature could be rolled out during Google I/O 2024, the company’s annual developer conference, which is scheduled for May 14, 2024. Google is also expected to show off the next iteration of its Pixel A series, the Pixel 8A, along with its progress in augmented reality (AR) technology and new software and service developments, especially for its devices and ChromeOS (the operating system that powers the best Chromebooks). 

Many Gmail users could find the new summarize feature a time-saver that streamlines their inboxes, but as with any generative AI, there are concerns about the accuracy of the generated text. If Gemini omits or misinterprets important information, it could lead to oversights or misunderstandings. I’m glad Google has the feedback system in place, as it will show whether the feature is actually serving its purpose well. We’ll have to wait and see whether it proves reasonably accurate and actually improves productivity when it’s finally released. 

YOU MIGHT ALSO LIKE…

TechRadar – All the latest technology news

Read More

OpenAI’s Sora just made its first music video and it’s like a psychedelic trip

OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely with its text-to-video engine, Sora. You can check out the whole thing on the company’s official YouTube channel, and it’s pretty trippy, to say the least. Worldweight consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments. 

You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience. It’s half pleasant and half unsettling. 

It’s unknown what text prompts were used with Sora; Kamp didn’t share that information. But she did explain the inspiration behind the clips in the video’s description. She states that when she created the track, she imagined what a video representing Worldweight would look like, but lacked a way to share her vision. Thanks to Sora, that’s no longer an issue, as the footage displays what she had always envisioned. It's “how the song has always ‘looked’” from her perspective.

Embracing Sora

If you pay attention throughout the entire runtime, you’ll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music’s ethereal nature, it all fits together. Nothing feels out of place or nightmare-inducing. If anything, the video embraces the nightmares.

We should mention August Kamp isn’t the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called “Air Head” which was also made on the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

Analysis: Lofty goals

It's hard to say if Sora will see widespread adoption judging by this content. Granted, things are in the early stages, but that hasn't stopped OpenAI from pitching its tech to major Hollywood studios. Studio executives are apparently excited at the prospect of AI saving time and money on production. 

August Kamp herself is a proponent of the technology, stating, “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”. She looks forward to seeing “what other forms of storytelling” will appear as artificial intelligence continues to grow.

In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what the generative AI can and cannot do. They embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can make “normal-looking content” remains to be seen.

People still talk about how weird or nightmare-inducing content made by generative AI is. Unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

It’s still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024.

If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.

Vision Pro spatial Personas are like Apple’s version of the metaverse without the Meta

While the initial hype over Apple Vision Pro may have died down, Apple is still busy developing and rolling out fresh updates, including a new one that lets multiple Personas work and play together.

Apple briefly demonstrated this capability when it introduced the Vision Pro and gave me my first test drive last year, but now spatial Personas are live on Vision Pro mixed-reality headsets.

To understand “spatial Personas” you need to start with the Personas part. You capture these somewhat uncanny-valley 3D representations of yourself using the Vision Pro's spatial (or 3D) cameras. The headset uses that data to build a 3D representation of you that can mimic your face, head, upper torso, and hand movements, and can be used in FaceTime and other video calls (if supported).

Spatial Personas do two key things: they let you put two (or more) avatars in one space, and they let those avatars interact with different screens or the same one, all in a spatially aware space. This still happens within the confines of a FaceTime call, where Vision Pro users will see a new “spatial Persona” button.

To enable this feature, you'll need the visionOS 1.1 update and may need to reboot the mixed-reality headset. After that, you can tap the spatial icon at any time during a FaceTime Persona call to enable the feature.

Almost together

Apple Vision Pro spatial Personas

(Image credit: Apple)

Spatial Personas support collaborative work and communal viewing experiences when combined with Apple's SharePlay. 

This will let you “sit side-by-side” (Personas don't have butts, legs, or feet, so “sitting” is an assumed experience) to watch the same movie or TV show. In an Environment (you spin the Vision Pro's digital crown until your real world disappears in favor of a selected environment such as Yosemite), you can also play multiplayer games. Most Vision Pro owners might choose “Game Room”, which positions the spatial avatars around a game table. A spatial Persona call can become a real group activity, with up to five spatial Personas participating at once.

Vision Pro also supports spatial audio, which means the audio for the Persona on your right will sound like it's coming from the right. Working in this fashion could end up feeling like everyone is in the room with you, even though they're obviously not.

Currently, any app that supports SharePlay can work with spatial Personas, but not every app will allow for single-screen collaboration. If you use window share or share the app, other Personas will be able to see your app window but not interact with it.

Being there

Apple Vision Pro spatial Personas

Freeform lets multiple Vision Pro spatial Personas work on the same app. (Image credit: Apple)

While your spatial Personas will appear in other people's spaces during the FaceTime call, you'll remain in control of your viewing experience and can still move your windows and Persona to suit your needs, while not messing up what people see in the shared experience.

In a video Apple shared, two spatial Personas are positioned on either side of a Freeform app window, which is, in and of itself, somewhat remarkable. But things take a surprising turn when each of them reaches out with their Persona hands to control the app with gestures. That feels like a game-changer to me.

In some ways, this seems like a much more limited form of Meta CEO Mark Zuckerberg's metaverse ideal, where we live, work, and play together in virtual reality. In this case, we collaborate and play in mixed reality while using still somewhat uncanny-valley avatars. To be fair, Apple has already vastly improved the look of these things. They're still a bit jarring, but less so than when I first set mine up in February.

I haven't had a chance to try the new feature, but seeing those two floating Personas reaching out and controlling an app floating in a single Vision Pro space is impressive. It's also a reminder that it's still early days for Vision Pro and Apple's vision of our spatial computing future. When it comes to utility, the pricey hardware clearly has quite a bit of road ahead of it.

Google Gemini AI looks like it’s coming to Android tablets and could coexist with Google Assistant (for now)

Google’s new generative AI model, Gemini, is coming to Android tablets. Gemini AI has been observed running on a Google Pixel Tablet, confirming that Gemini can exist on a device alongside Google Assistant… for the time being, at least. Currently, Google Gemini is available to run on Android phones, and it’s expected that it will eventually replace Google Assistant, Google’s current virtual assistant that’s used for voice commands.

When Gemini is installed on an Android phone, users are prompted to choose between using Gemini and Google Assistant. It’s unknown whether this restriction will apply to tablets when Gemini finally arrives for them – though at the moment, it appears not. 

Man sitting at a table working on a laptop

(Image credit: Shutterstock/GaudiLab)

A discovery in Google Search's code

The news was brought to us via 9to5Google, which did an in-depth report on the latest beta version (15.12) of the Google Search app in the Google Play Store and discovered that it contains code referring to using Gemini AI on a “tablet,” along with the features it would offer. 

The code also shows that the Google app will host Gemini AI on tablets, rather than the standalone app that currently exists for Android phones. That said, Google might be planning a separate Gemini app for tablets and possibly other devices, especially if its plans to phase out Google Assistant are still in place. 

9to5Google also warns that, as this is still a beta version of the Google Search app, Google could still change its mind and not roll out these features.

A woman using an Android phone.

(Image credit: Shutterstock/brizmaker)

Where does Google Assistant stand?

When 9to5Google activated Gemini on a Pixel Tablet, it found that Google Assistant and Gemini would function simultaneously: when both were installed and activated and the voice command “Hey Google” was used, Google Assistant was brought up instead of Gemini. Gemini for Android tablets is yet to be finalized, though, so Google might still implement a restriction similar to the one on phones, preventing Gemini and Google Assistant from running at the same time.

This in turn contradicted screenshots of the setup screen showing that Gemini will take precedence over Google Assistant if users choose to use it.

The two digital assistants don’t have the same features yet, and we know that the Pixel Tablet was designed to act as a smart display that uses Google Assistant when docked. Because Google Assistant steps in when someone asks Gemini to do something it’s unable to do, we may see the two assistants running in parallel for the time being, until Gemini has all of Google Assistant's capabilities, such as smart home features. 

Meanwhile, Android Authority reports that the Gemini experience on the Pixel Tablet is akin to that on the Pixel Fold, and predicts that Google’s own tablets will be the first Android tablets to gain Gemini capabilities. This makes sense, as Google may want to use Gemini exclusivity to encourage more people to buy Pixel tablets in the future. The Android tablet market is highly competitive, and advanced AI capabilities may help Pixel tablets stand out.

A cheaper Meta Quest 3 might be coming, but trust me, it won’t look like the leaks

Over the past few days we’ve been treated to two separate Meta Quest 3 leaks – or more accurately, leaks for a new cheaper Quest 3 that’s either called the Meta Quest 3s or Meta Quest 3 Lite, depending on who you believe.

But while the phrase ‘where there's smoke there's fire’ can often ring true in the world of tech leaks, I’m having a really tough time buying what I’ve seen so far from the two designs.

Going in chronological order, the first to drop was a Meta Quest 3 Lite render shared by @ZGFTECH on Twitter.

It looks an awful lot like an Oculus Quest 2 with its slightly bulkier design – perhaps because it seems to use the Quest 2’s fresnel lens system instead of a slimmer pancake lens system like the Quest 3’s – but with more rounded edges to match its namesake. 

Interestingly, it also lacks any kind of RGB cameras or a depth sensor – which for me is a massive red flag. Mixed reality is the main focus for XR hardware and software right now, so of all the downgrades to make for the Lite, removing full-color MR passthrough seems the most absurd. It’d be much more likely for Meta to give the Quest 3 Lite a worse display or chipset.

@ZGFTECH did later clarify that they aren’t saying the Quest 3 Lite lacks RGB cameras, just that their renders exclude them because they can’t reveal more “at the moment.” Though as I said before, I expect mixed reality would be a key Quest 3 Lite feature, so I’m more than a little surprised this detail is shrouded in mystery.

Then there’s the Meta Quest 3s leak. The original Reddit post has since been deleted, but copies like this Twitter post remain online.

Just like the Meta Quest 3 Lite leaked design, this bulkier headset suggests a return to fresnel lenses. Although unlike the previous model, we see some possible RGB cameras and sensors on the front face panel. On top of this, we also get some more details about specs – chiefly that the cheaper Quest 3 could boast dual 1,832 x 1,920 pixel displays.

But while the design seems a little more likely (if a little too ugly), the leak itself is setting off my BS detectors. The first issue is that the shared images include elements of a Zoom call that might make it quite easy to determine who the leaker is. To see these early designs, the leaker likely had to sign an NDA carrying some kind of financial penalty for sharing the info, and unless they have zero care for their financial well-being, I’d have expected them to be a lot more careful about what they do and don’t share, lest they face the wrath of Meta’s well-funded legal team.

On top of this, some of the promotional assets seem a little off. Some feature the original Quest 3 rather than the new design, some of the images don’t seem especially relevant to a VR gadget, and ports and buttons change positions – and parts change color – across the various renders.

As such, I’m more than a little unconvinced that this is a genuine leak.

The Meta Quest 3 and controllers on their charging stand

(Image credit: Meta)

Meta Quest 3 Lite: fact or fiction? 

I guess the follow-up question from my skepticism over these leaks is: is a cheaper Meta Quest 3 even on the way? 

Inherently, the idea isn’t absurd. The Quest 3 may be cheaper than many other VR headsets, but at $499.99 / £479.99 / AU$799.99 it is pricier than the Quest 2 was at launch – $299 / £299 / AU$479 – and that affordable price point is the central reason the Quest 2 sold phenomenally well.

I’ve previously estimated that the Quest 3 is selling slightly slower than its predecessor did at the same point in its lifespan, so Meta may be looking to juice its figures by releasing a cheaper model.

What’s more, while these leaks have details that leave me more than a little skeptical, the fact that we have had two leaks in such a short stretch of time leaves me feeling like there might be some validity to the rumors.

A Meta Quest 3 player sucking up Stay Puft Marshmallow Men from Ghostbusters in mixed reality using virtual tech extending from their controllers

The Quest 3 Lite needs good quality mixed reality (Image credit: Meta)

So while we can't yet say for certain it's coming, I wouldn't be surprised if Meta announced a Quest 3 Lite or S. I'm just not convinced that it’ll look like either of these leaked designs.

For me, the focus would be on having a sleek mixed reality machine – which would require full-color passthrough and pancake rather than fresnel lenses (which we have seen on affordable XR hardware like the Pico 4).

The cost savings would then come from having lower resolution displays, less storage (starting at 64GB), and having a worse chipset or less RAM than we see in the Quest 3. 

We’ll have to wait and see if Meta announces anything officially. I expect we won’t hear anything until either its Meta Quest Gaming Showcase for 2024 – which is due around June – or this year’s Meta Connect event – which usually lands around September or October.

Microsoft is planning to make Copilot behave like a ‘normal’ app in Windows 11

Windows 11 is set for a major change to the Copilot interface, or at least this is something that’s being tried out in testing.

With Windows 11’s preview build 26080 (in both Canary and Dev channels), Microsoft is adding a choice to free Copilot from the shackles that bind the AI assistant to the right-hand side of the screen.

Normally, the Copilot panel appears on the right, and you can’t do anything about that (although Microsoft has been experimenting with the ability to resize it, and other bits and bobs besides).

With this change, you can now undock Copilot, so the AI is in a normal app window, which can be moved wherever you want on the desktop, and resized appropriately. In other words, you’re getting a lot more versatility regarding where you want Copilot to appear.

Also in this preview build, more users are getting Copilot’s new abilities to alter Windows 11 settings. That functionality was already introduced to Canary testers, but is now rolling out to more of those folks, and Windows Insiders in the Dev channel too.

The extra capabilities include getting the AI assistant to empty the Recycle Bin, or turn on Live Captions, or Voice Access (there are a fair few new options on the accessibility front, in fact).


Analysis: Under the hood tinkering, too

Not all testers in the mentioned channels will see the ability to fully free Copilot and let the AI roam the desktop for a while yet, mind. Microsoft says it’s just starting the rollout – and it’ll only be for those in the Canary channel initially. A broader rollout will follow, with Microsoft asking for feedback as it goes, and adjusting things based on what it hears from Windows 11 testers, no doubt.

There are also some ‘under-the-hood improvements’ coming for Copilot as well, as mentioned in the blog post, but mysteriously, Microsoft doesn’t say what. We can only guess that this might be performance related, as that seems the most obvious way that tinkering in the background could improve things with Copilot. (Perhaps it’s to do with ensuring the smooth movement of the undocked panel for the AI, even).

Get ready to learn about what Windows 11 of the future looks like at Microsoft’s March 21 event

We’ve begun getting hints of what Microsoft is gearing up to announce for Windows 11 at its March event, and now we’ve got new pieces of the puzzle. We’re expecting information about a new feature for the Paint app, Paint NPU, and about a feature that’s being referred to as ‘AI Explorer’ internally at Microsoft. 

Microsoft has put up an official page announcing a special digital event named “New Era of Work” which will take place on March 21, starting at 9 PM PDT. On this page, users are met with the tagline “Advancing the new era of work with Copilot” and a description of the event that encourages users to “Tune in here for the latest in scaling AI in your environment with Copilot, Windows, and Surface.”

It sounds like we’re going to get an idea of what the next iteration of Windows Copilot, Microsoft’s new flagship digital AI assistant, will look like and what it’ll be able to do. It also looks like we might see Microsoft’s vision for what AI integration and features will look like for future versions of Windows and Surface products. 

A screenshot of the page announcing Microsoft's digital event.

(Image credit: Microsoft)

What we already know and expect

While we’ll have to wait until the event to see exactly what Microsoft wants to tell us about, we do have some speculation from Windows Latest that one feature we’ll learn about is a Paint app tool powered by new-gen machines’ NPUs (Neural Processing Units). These are processing components that enable new kinds of processes, particularly many AI processes.

This follows earlier reports that indicated that the Paint app was getting an NPU-driven feature, possibly new image editing and rendering tools that make use of PCs’ NPUs. Another possible feature that Windows Latest spotted was “LiveCanvas,” which may enable users to draw real-time sketches aided by AI. 

Earlier this week, we also reported on a new ‘AI Explorer’ feature, apparently currently in testing at Microsoft. This revamped version, which has been described as an “advanced Copilot,” looks like it could be similar to the Windows Timeline feature, but improved by AI. The present version of Windows Copilot requires an internet connection, but rumors suggest that this could change. 

This is what we currently understand about how the feature will work: it will record previous actions users perform, transform them into ‘searchable moments,’ and allow users to search these, as well as remove them. Windows Latest also reinforces the news that most existing PCs running Windows 11 won’t be able to use AI Explorer, as it’s designed to use the newest available NPUs, intended to handle and assist higher-level computation tasks. The NPU would enable AI Explorer to work natively on Windows 11 devices, and users will be able to interact with it using natural language.

Using natural language means that users can ask AI Explorer to carry out tasks simply and easily, letting them access past conversations, files, and folders with simple commands, and they will be able to do this with most Windows features and apps. AI Explorer will have the capability to search user history and find information relevant to whatever subject or topic is in the user’s request. We don’t know if it’ll pull this information exclusively from user data or other sources like the internet as well, and we hope this will be clarified on March 21. 

Person working on laptop in kitchen

(Image credit: Getty Images)

What else we might see and what this might mean

In addition to an NPU-powered Paint app feature and AI Explorer, it looks like we can expect the debut of other AI-powered features, including an Automatic Super Resolution feature. This has popped up in Windows 11 24H2 preview builds, and it’s said to leverage PCs’ AI abilities to improve users’ visual experience. This will reportedly be done by utilizing DirectML, an API that also makes use of PCs’ NPUs, and will bring improvements to frame rates in games and apps.

March 21 is gearing up to bring what promises to be an exciting presentation, although it’s worth remembering that all of these new features will require an NPU. Only the most newly manufactured Windows devices come equipped with one, which will leave the overwhelming majority of Windows devices and users in the dust. My guess is that Microsoft is banking on the appeal of its new AI-driven features to convince users to upgrade to these new models, and given the current state of apps and services like Windows Copilot, that appeal has yet to be proven in practice.

Microsoft’s Copilot AI can now read your files directly, but it’s not the privacy nightmare it sounds like

Microsoft has begun rolling out a new feature for its Copilot AI assistant in Windows that will allow the bot to directly read files on your PC, then provide a summary, locate specific data, or search the internet for additional information. 

Copilot has already been aggressively integrated into Microsoft 365 and Windows 11 as a whole, and this latest feature sounds – at least on paper – like a serious privacy issue. After all, who would want an AI peeking at all their files and uploading that information directly to Microsoft?

Well, fortunately, Copilot isn’t just going to be snooping around at random. As spotted by @Leopeva64 on X (formerly Twitter), you have to manually drag and drop the file into the Copilot chat box (or select the ‘Add a file’ option). Once the file is in place, you can proceed to make a request of the AI; the suggestion provided by Leopeva64 is simply ‘summarize’, which Copilot proceeds to do.

Another step towards Copilot being genuinely useful

I’ll admit it, I’m a Copilot critic. Perhaps it’s just because I’m a jaded career journalist with a lifetime of tech know-how and a neurodivergent tilt towards unhealthy perfectionism, but I’ve never seen the value of an AI assistant built into my operating system of choice; however, this is the sort of Copilot feature I actually might use.

The option to summarize alone seems quite useful: more than once, I’ve been handed a chunky PDF with embargoed details about a new tech product, and it would be rather nice not to have to sift through pages and pages of dense legalese and tech jargon just to find the scraps of information that are actually relevant to TechRadar’s readership. Summarizing documents is already something that ChatGPT and Adobe Acrobat AI can do, so it makes sense for Copilot – an AI tool that's specifically positioned as an on-system helper – to be able to do it.

While I personally prefer to be the master of my own Googling, I can see the web-search capabilities being very helpful to a lot of users, too. If you’ve got a file containing partial information, asking Copilot to ‘fill in the blanks’ could save you a lot of time. Copilot appears capable of reading a variety of different file types, from simple text documents to PDFs and spreadsheets. Given the flexible nature of modern AI chatbots, there are potentially many different things you could ask Copilot to do with your files – though apparently, it isn’t able to scan files for viruses (at least, not yet).

If you’re keen to get your hands on this feature yourself, you hopefully won’t have to wait long. While it doesn’t seem to be widely available just yet, Leopeva64 notes that it appears Copilot’s latest new skill “is being rolled out gradually”, so it’ll likely start showing up for more Windows 11 users as time goes on.

The Edge version of Copilot will apparently be getting this feature too, as Leopeva points out that it’s currently available in the Canary prototype build of the browser – if you want to check that out, you just have to sign up for the Edge Insider Program.

Windows 11 is losing Mail and Calendar apps – so you’ll have to use Outlook whether you like it or not

Microsoft has set a date to remove the Mail and Calendar applications from Windows 11 at the end of the year, as well as dropping the apps from the Microsoft Store. The company will also stop putting out updates or support once the year is out. So, if you haven’t moved over to Outlook, you’ve got until December 31, 2024 to do so. 

In the coming weeks, you may notice a little pop-up appear when you open the Mail or Calendar app, trying to nudge you over to the Outlook app to give it a go if you haven’t already. Of course, Windows 11 devices released in 2024 will come with the updated Outlook app as the default mail app, so if you’re working on a new machine you’re likely already using Outlook. 

You’ll still have the option to ignore the prompt and carry on with Mail and Calendar, but only until the end of the year. If you’re still keen to stick with them, you will have to make sure they’re already installed and up to date on any device you plan to be using them on (like your home computer, work set up, personal laptop and so on). 

Sticking with it

Do bear in mind that you won’t be receiving any security updates or bug fixes once the cut-off point passes, and there’s no guarantee after the fact that Microsoft won’t bin them off entirely soon after. 

Users are still on the fence when it comes to embracing the new Outlook, with some eager to get to know the updated interface and others adamant about not moving away from the familiar Mail and Calendar apps. Either way, you don’t seem to have much choice, and yet another pop-up message encouraging a move to Microsoft's newer software may not go down well with users who are already sick of the company's nagging.

Via Windows Latest

Here’s what third-party iPhone app stores will look like – and how they’ll work

Big changes are coming to the iOS App Store for users in the European Union (EU), as Apple has announced it will soon start allowing third-party app stores to distribute apps to users from a host of European nations. And now, we’ve gained our first look at what these stores could look like.

AltStore, an existing provider of “sideloaded” apps, has announced that it’s working on bringing its own alternative app store to iOS. That will move the store out of its current gray area of providing unofficial apps and transform it into what its developer calls a “legitimate app marketplace”.

Right now, AltStore provides a range of apps that fall foul of Apple’s existing App Store rules. For example, it hosts Delta, a Nintendo games console emulator, and UTM, a virtual machine that allows you to run Linux, Windows and more on iOS.

AltStore’s developer did not outline exactly what changes it is planning to make, but one difference is likely to be the installation process. Right now, you have to install a server app onto your Mac or Windows PC, then connect your iOS device and install the app store from your computer. 

Once AltStore has been approved by Apple as that “legitimate app marketplace,” you will likely be able to download the AltStore app directly to your iPhone, with no lengthy workaround process required. In theory, this will mean being able to download any apps you want, including ones that don't conform to Apple's own App Store guidelines.

The AltStore app running on an iPhone.

(Image credit: AltStore)

You'll also be able to set the likes of AltStore (assuming it gets approval) as your iPhone's default app store, and manage it in Settings. As Apple states in its explainer about the app changes, “users can manage their list of allowed marketplace developers and their marketplace apps in Settings and remove them at any time”. 

Your default third-party app store will integrate with some iPhone features like Spotlight, to help you find and use the apps. But if you delete that non-Apple App Store, this will also delete “all related data from the device and stop updates for apps from that marketplace”.

A seismic change coming to your apps

Browsing the App Store on an iPhone.

(Image credit: Jaap Arriens/NurPhoto via Getty Images)

The momentous change in Apple’s App Store policy will be implemented in iOS 17.4, which is currently in beta and is due for a full release in March. 

Anyone in the EU will be able to install apps from third-party stores, and any developer will be able to release their own app store as long as they meet Apple’s requirements for fraud prevention, customer service and experience, and can provide a €1 million letter of credit attesting to their ability to guarantee user support. However, despite the potential for this move to upend the way European users get their apps, there are a few catches attached to it.

For instance, Apple says that restrictions you place on in-app purchases using iOS’s Screen Time feature will not work in third-party app stores. Likewise, Family Purchase Sharing will be limited, as will the Ask to Buy feature, while universal purchases – where apps you buy work across various Apple platforms – won’t be available. That’s because Apple won’t be facilitating payments on third-party stores, so won’t be able to implement these features. The company also says it won’t be able to help users with refunds, purchase history, subscription management, and more.

Apple has fought tooth and nail against this change, but its hand was forced by the EU’s Digital Markets Act (DMA), which will start levying hefty fines against companies that don’t open up their platforms from March onwards. Apple says this move is likely to provide “new avenues for malware, fraud and scams, illicit and harmful content, and other privacy and security threats,” and that it won’t be lifting its App Store restrictions anywhere outside the EU. The company may even be able to stop you from bypassing the geolocation restrictions with a VPN, too.

That said, opening up iOS in this way could lead to some more positive changes. Web browsers on iOS won’t be forced to use Apple’s WebKit engine, for example, and users will be given greater ability to change their default browser. Payment apps will also gain access to Apple’s NFC system, which could mean we start to see contactless alternatives to Apple Pay popping up.

With the EU breathing down its neck, Apple has been forced to begrudgingly make these changes. That could prompt other jurisdictions around the world to consider passing their own app store laws, finally blasting a hole through Apple’s long-standing walled garden. That’s perhaps something for the future – for now, AltStore has shown us what that future could look like.
