Vision Pro spatial Personas are like Apple’s version of the metaverse without the Meta

While the initial hype over Apple Vision Pro may have died down, Apple is still busy developing and rolling out fresh updates, including a new one that lets multiple Personas work and play together.

Apple briefly demonstrated this capability when it introduced the Vision Pro and gave me my first test drive last year, but now spatial Personas are live on Vision Pro mixed-reality headsets.

To understand “spatial Personas,” you need to start with the Personas part. You capture these somewhat uncanny-valley 3D representations of yourself using the Vision Pro's spatial (3D) cameras. The headset uses that data to build an avatar that can mimic your face, head, upper torso, and hand movements, and that can be used in FaceTime and other video calls (where supported).

Spatial Personas do two key things: they let you put two (or more) avatars in one space, and they let those avatars interact with different screens, or the same one, in a spatially aware way. This all still happens within the confines of a FaceTime call, where Vision Pro users will see a new “spatial Persona” button.

To enable this feature, you'll need the visionOS 1.1 update and may need to reboot the mixed-reality headset. After that, you can tap the spatial Persona icon at any time during a FaceTime Persona call to turn the feature on.

Almost together

Apple Vision Pro spatial Personas

(Image credit: Apple)

Spatial Personas support collaborative work and communal viewing experiences when combined with Apple's SharePlay.

This will let you “sit side-by-side” (Personas don't have butts, legs, or feet, so “sitting” is an assumed experience) to watch the same movie or TV show. In an Environment (you spin the Vision Pro's Digital Crown until your real world disappears in favor of a selected environment like Yosemite), you can also play multiplayer games. Most Vision Pro owners might choose “Game Room,” which positions the spatial avatars around a game table. A spatial Persona call can become a real group activity, with up to five spatial Personas participating at once.

Vision Pro also supports spatial audio, which means the audio for the Persona on your right will sound like it's coming from the right. Working in this fashion could end up feeling like everyone is in the room with you, even though they're obviously not.
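
The underlying idea is easy to sketch. Positional audio engines typically derive per-channel gains from the angle between you and each sound source; below is a minimal, hypothetical Python illustration of constant-power stereo panning. It's a sketch of the general technique, not Apple's actual implementation, which involves far more sophisticated head-tracked processing.

```python
import math

def stereo_gains(azimuth_deg):
    """Constant-power panning: map a source's horizontal angle
    (-90 = hard left, +90 = hard right) to left/right channel gains."""
    # Normalize the clamped angle to a pan position in [0, 1].
    pan = (max(-90.0, min(90.0, azimuth_deg)) + 90.0) / 180.0
    # The constant-power law keeps perceived loudness steady as a source moves.
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right

# A Persona sitting 45 degrees to your right lands mostly in the right channel.
print(stereo_gains(45.0))  # ~(0.38, 0.92)
```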

Currently, any app that supports SharePlay can work with spatial Personas, but not every app allows for single-screen collaboration. If you share a window, or the app itself, other Personas will be able to see your app window but not interact with it.

Being there

Apple Vision Pro spatial Personas

Freeform lets multiple Vision Pro spatial Personas work on the same app. (Image credit: Apple)

While your spatial Persona will appear in other people's spaces during the FaceTime call, you remain in control of your own viewing experience: you can still move your windows and Persona to suit your needs without disrupting what other people see in the shared experience.

A video Apple shared shows two spatial Personas positioned on either side of a Freeform app window, which is in and of itself somewhat remarkable. But things take a surprising turn when each of them reaches out with their Persona hands to control the app with gestures. That feels like a game-changer to me.

In some ways, this seems like a much more limited form of Meta CEO Mark Zuckerberg's metaverse ideal, where we live, work, and play together in virtual reality. In this case, we collaborate and play in mixed reality while using still somewhat uncanny-valley avatars. To be fair, Apple has already vastly improved the look of these things. They're still a bit jarring, but less so than when I first set mine up in February.

I haven't had a chance to try the new feature, but seeing those two floating Personas reaching out and controlling an app floating in a single Vision Pro space is impressive. It's also a reminder that it's still early days for the Vision Pro and Apple's vision of our spatial computing future. When it comes to utility, the pricey hardware clearly has quite a bit of road ahead of it.


ChatGPT just took a big step towards becoming the next Google with its new account-free version

The most widely available (and free) version of ChatGPT, powered by the GPT-3.5 model, is being made available to use without having to create and log into a personal account. That means you can have conversations with the AI chatbot without them being tied to personal details like your email. However, OpenAI, the tech organization behind ChatGPT, limits what users can do without registering for an account. For example, unregistered users will be limited in the kinds of questions they can ask and in their access to advanced features.

This means there are still some benefits to making and using a ChatGPT account, especially if you’re a regular user. OpenAI writes in an official blog post that this change is intended to make it easy for people to try out ChatGPT and get a taste of what modern AI can do, without going through the sign-up process. 

In its announcement post on April 1, 2024, OpenAI explained that it’s rolling out the change gradually, so if you want to try it for yourself and can’t yet, don’t panic. When speaking to PCMag, an OpenAI spokesperson explained that this change is in the spirit of OpenAI’s overall mission to make it easier for people “to experience ChatGPT and the benefits of AI.”

Woman sitting by window, legs outstretched, with laptop

(Image credit: Shutterstock/number-one)

To create an OpenAI account or not to create an OpenAI account

If you don’t want your entries into the AI chatbot to be tied to the details you would have to disclose when setting up an account, such as your birthday, phone number, and email address, then this is a great development. That said, lots of people create dummy accounts to be able to use apps and web services, so I don’t think the requirement was that hard to circumvent anyway – though you’d need multiple emails and phone numbers to ‘burn’ for that purpose.

OpenAI does have a disclaimer stating that, by default, it stores your inputs and may use them to improve ChatGPT, whether you’re signed in or not – which I suspected was the case. It also states that you can turn this off via ChatGPT’s settings, again whether you have an account or not.

If you do choose to make an account, you get some useful benefits, including being able to see your previous conversations with the chatbot, link others to specific conversations you’ve had, make use of the newly introduced voice conversation features and custom instructions, and upgrade to ChatGPT Plus, the premium subscription tier of ChatGPT that gives users access to GPT-4, OpenAI's latest large language model (LLM).

If you decide not to create an account and forgo these features, you can expect to see the same chat interface that users with accounts use. OpenAI is also putting additional content safeguards in place for users who aren’t logged in, saying that it has measures to block prompts and generated responses in more categories and topics. Its announcement post didn’t include any examples of the types of topics or categories that will get this treatment, however.

Man holding a phone displaying ChatGPT, a prototype artificial intelligence chatbot developed by OpenAI

(Image credit: Shutterstock/R Photography Background)

An invitation to users, a power play to rivals?

I think this is an interesting change that will possibly tempt more people to try ChatGPT, and when they try it for the first time, it can seem pretty impressive. It allows OpenAI to give users a glimpse of its capabilities, which I imagine will convince some people to make accounts and access its additional features. 

This should continue to expand ChatGPT’s user pool, some of whom may go on to become paying ChatGPT Plus subscribers. Perhaps this is a strategy that will pay off for OpenAI, and it might institute a sort of pass-it-down approach through the tiers as it introduces new generations of its models.

This easier accessibility could mean the kind of user growth that sees ChatGPT become as commonplace as Google products in the near future. One of Google Search’s appeals, for example, is that you can just fire up your browser and make a query in an instant. It’s a user-centric way of doing things, and if OpenAI can do something similar by making it that easy to use ChatGPT, then things could get seriously interesting.


Microsoft Edge could soon get its own version of Google’s Circle to Search feature

As the old saying goes, “Imitation is the sincerest form of flattery”. Microsoft is seemingly giving Google a huge compliment as new info reveals the tech giant is working on its own version of Circle to Search for Edge.

If you’re not familiar, Circle to Search is a recently released AI-powered feature on the Pixel 8 and Galaxy S24 series of phones. It allows people to circle objects on their mobile devices to quickly look them up on Google Search. Microsoft’s rendition functions similarly. According to the news site Windows Report, it’s called Circle To Copilot. The way it works is that you circle an on-screen object with the cursor – in this case, an image of the Galaxy S24 Ultra.

Immediately after, Copilot appears from the right side with the circled image attached as a screenshot in an input box. You then ask the AI assistant what the object in the picture is, and after a few seconds, it’ll generate a response. The publication goes on to state that the tool also works with text: to highlight a line, you draw a circle around the words.
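
Windows Report doesn't describe the internals, but the capture step is easy to imagine: sample the cursor positions while the user draws the circle, take their bounding box, and screenshot that region to attach to the chat. Here's a minimal, hypothetical Python sketch of that idea – the function and cursor trace are our own illustrative assumptions, not Microsoft's code, though Pillow's ImageGrab is a real library call:

```python
from PIL import ImageGrab  # pip install Pillow; screen capture on Windows/macOS

def grab_circled_region(gesture_points, padding=10):
    """Turn the (x, y) cursor samples from a circling gesture into
    a screenshot of the screen region they enclose."""
    xs = [x for x, _ in gesture_points]
    ys = [y for _, y in gesture_points]
    # Bounding box of the circle, padded slightly so the object isn't clipped.
    bbox = (min(xs) - padding, min(ys) - padding,
            max(xs) + padding, max(ys) + padding)
    return ImageGrab.grab(bbox=bbox)

# Hypothetical cursor trace drawn around an on-screen product image.
trace = [(400, 300), (520, 280), (560, 380), (470, 430), (390, 370)]
grab_circled_region(trace).save("circled_object.png")  # what gets attached to the chat
```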

Windows Report states Circle To Copilot is currently available in the latest version of Microsoft Edge Canary, an experimental build of the browser meant for users and developers who want early access to potential features. The publication has a series of instructions explaining how you can activate Circle To Copilot; you'll need to enter a specific command into the browser's Properties menu.

If the command works for you, Circle To Copilot can be enabled by going to the Mouse Gesture section of Edge’s Settings menu and then clicking the toggle switch. It’s the fourth entry from the top.

Work in progress

We followed Windows Report's steps ourselves; however, we were unable to try out the feature. All we got was an error message stating the command to activate the tool was not valid. It seems not everyone who installs Edge Canary will gain access, although this isn’t surprising. 

The dev browser is, unsurprisingly, unstable. It’s a testing ground for Microsoft, so things don’t always work as well as they should, if at all. It is possible Circle To Copilot will function better in a future patch; however, we don’t know when that will roll out. We were disappointed the feature was inaccessible on our PC because we had a couple of questions. Is this something that needs to be manually triggered in Copilot? Or will it function like Ask Copilot, where you highlight a piece of content, right-click it, and select the relevant option in the context menu?

Out of curiosity, we installed Edge Canary on our Android phone to see if it had the update. As it turns out, it doesn't. It may be that Circle To Copilot is exclusive to Edge on desktop, but this could change in the future.

Be sure to check TechRadar's list of the best AI-powered virtual assistants for 2024.


Think Windows 11 is too bloated? This 100MB version could be worth a try – or drive you bananas

NTDEV, the team behind Tiny11, is back and has achieved an incredible feat – compressing Windows 11 down to just 100MB. While impressive, we wouldn’t recommend trying to run the developer's latest take on Microsoft’s operating system, because, well, it's a bit bare, to say the least. To shrink Windows 11 to such a size, they’ve had to strip away much of what we’re familiar with, reducing it to a text-only version.

In a YouTube video posted by NTDEV (via PC Gamer), you can get a better idea of what it looks like. Gone is the normal GUI (graphical user interface) that we all know and love (well, depending on who you ask), replaced with an almost entirely black background and lines of white text – essentially turning Windows 11 into a command-line operating system like DOS (an old PC operating system that was popular before Windows 3.1 arguably killed it off).

So, there are no windows, no colorful greeting screen, and no desktop. You won’t have a menu to select from or a taskbar to search for apps – instead, you’ll have to type exactly what you want to do, similar to how you would in the command-line app on your PC.

There are no pre-installed apps either, of course, so forget about firing up Microsoft Paint. With the GUI gone, you lose everything except the very bare bones of Windows 11. Of course, NTDEV is not doing this to allow people to download and use the itty bitty OS for their everyday lives, but instead to just show that it is even possible. Most people who work office jobs or in fields that require daily computer use probably don’t want to add hours to their work week having to type in a command prompt to bring up everything they’d normally be able to access with a single mouse click. 

This could be a fun project, however, for users who’ve always wanted to bring newer versions of Windows to life on some very old computers. Nick Evanson of PC Gamer makes the point that most people are probably not thrilled with AI jumping into almost every app and potentially future generations of Windows (more so than we’re seeing already), so perhaps this is a potential solution for users who want to go back to the basics – like, very basic.

Still, it's a very cool proof of concept that makes one nostalgic for 1980s computing aesthetics, and it could provide a point of reflection on how far we’ve come in the world of computing. However, I do prefer my Windows to actually have windows!


Meta is ‘working on’ its own version of Apple Vision Pro Travel Mode so you can use your Quest 3 on a plane

Meta might have been a big – perhaps even the biggest – player in the VR world for some time, but newcomer Apple is already giving it ideas for features to add to its Quest headsets, and the first of these could be the ability to use your Quest VR wearable while moving in a car or on a plane.

The reveal trailer for the Apple headset – which is finally set to launch on February 2, with Vision Pro preorders already having gone live (many Apple headsets are already being sold on eBay for extremely high prices) – showed off several use cases for the gadget. One of the examples was a person slipping the headset on while sitting in a plane seat, presumably so they could enjoy an immersive experience while traveling.

Using a VR headset while traveling – especially on a packed plane – sounds like a no-brainer. Rather than having to contend with movies displayed on a small screen on the back of the seat in front of you, you can enjoy them on a massive virtual movie theatre screen and forget that you’re crammed into coach like a sardine.

A woman wearing the Apple Vision Pro while on a plane with other passengers next to them.

(Image credit: Apple)

However, while the idea sounds simple, it’s rather tricky to pull off – as one disappointed Meta Quest 3 user discovered when they struggled to use mixed reality on a flight. On Twitter/X, user @afoxdesign posted a rather amusing clip of their Quest 3 menu floating off into the distance while trying to use the headset on a flight.

In a reply to the post, Meta CTO Andrew Bosworth (@boztank) explained that the issue is caused by the plane’s movement throwing off the headset’s IMUs (inertial measurement units). The sensors pick up the plane’s movement and acceleration, so the headset thinks you’re moving about and adjusts the position of virtual objects accordingly.
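
A back-of-the-envelope calculation shows why this matters so quickly. Position tracking effectively integrates acceleration twice, so any of the plane's acceleration the headset can't account for becomes apparent head movement. The numbers below are our own illustrative assumptions, not Meta's telemetry – but they show why the menu sails off into the distance:

```python
# Why a plane's acceleration confuses an IMU-only tracker: double integration.
accel = 0.5   # m/s^2 -- a gentle cruise adjustment, far below takeoff thrust
dt = 0.01     # 100 Hz sensor updates
velocity = 0.0
drift = 0.0   # position error attributed to "head movement"

for _ in range(int(10 / dt)):  # just 10 seconds of unmodeled acceleration
    velocity += accel * dt
    drift += velocity * dt

print(f"apparent head movement after 10 s: {drift:.1f} m")  # ~25 m
```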

Encouragingly, Bosworth added that Meta is “Working on it” with regard to making it possible to use Quest headsets while traveling in a vehicle.


Back in May 2023, Meta showed off a demo in which a Meta Quest Pro was used in a BMW, with the car’s own sensors keeping the headset's IMUs in check. Unfortunately, this solution wouldn’t work for low-tech vehicles or commercial planes, where it might not be the safest idea to give random people direct access to the aircraft’s sensors.

Option two, then, may be to introduce a simplified travel mode in which these motion sensors are turned off. Instead, the headset would use scaled-back tracking data and reference points to enable stable versions of static experiences like watching a video or playing a game through the VR Xbox Game Pass app – becoming a headset version of the Xreal Air 2 and similar wearable AR display glasses.

We’ll have to wait and see what Meta comes up with, but with Apple offering a solution to the using-a-headset-while-traveling problem, and Bosworth saying that a solution is being worked on, we’re hopeful that Quest headsets will be usable on a plane or in a car in the not too distant future.


ChatGPT will get video-creation powers in a future version – and the internet isn’t ready for it

The web's video misinformation problem is set to get a lot worse before it gets better, with OpenAI CEO Sam Altman going on the record to say that video-creation capabilities are coming to ChatGPT within the next year or two.

Speaking to Bill Gates on the Unconfuse Me podcast (via Tom's Guide), Altman pointed to multimodality – the ability to work across text, images, audio and “eventually video” – as a key upgrade for ChatGPT and its models over the next two years.

While the OpenAI boss didn't go into too much detail about how this is going to work or what it would look like, it will no doubt work along similar lines to the image-creation capabilities that ChatGPT (via DALL-E) already offers: just type a few lines as a prompt, and you get back an AI-generated picture based on that description.
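
For a sense of what that might look like, here's how the existing prompt-to-image flow works using OpenAI's current Python library (the images API shown is real today; the video equivalent is, for now, speculation):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Today's flow: a few lines of text in, an AI-generated picture out.
response = client.images.generate(
    model="dall-e-3",
    prompt="A golden retriever surfing a wave at sunset, photorealistic",
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # link to the generated image

# A future video endpoint would presumably look much the same: prompt in, clip out.
```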

Once we get to the stage where you can ask for any kind of video you like, featuring any subject or topic you like, we can expect to see a flood of deepfake videos hit the web – some made for fun and for creative purposes, but many intended to spread misinformation and to scam those who view them.

The rise of the deepfakes

Deepfake videos are already a problem of course – with AI-generated videos of UK Prime Minister Rishi Sunak popping up on Facebook just this week – but it looks as though the problem is about to get significantly worse.

Adding video-creation capabilities to a widely accessible and simple-to-use tool like ChatGPT will mean it gets easier than ever to churn out fake video content, and that's a major worry when it comes to separating fact from fiction.

The US will be going to the polls later this year, and a general election in the UK is also likely to happen at some point in 2024. With deepfake videos purporting to show politicians saying something they never actually said already circulating, there's a real danger of false information spreading online very quickly.

With AI-generated content becoming more and more difficult to spot, the best way of knowing who, and what, to trust is to stick to well-known and reputable publications online for your news – not something that's been reposted by a family member on Facebook, or pasted from an unknown source on the platform formerly known as Twitter.


Windows 11 could get next-gen USB4 Version 2.0 support with speeds of up to 80Gbps

Windows 11 could soon benefit from super-fast USB devices, as Microsoft is currently testing support for a new 80Gbps USB standard.

Known as USB4 Version 2.0, this successor to USB4 is capable of delivering data transfer speeds of up to 80Gbps, double USB4’s 40Gbps. The preview was released through the Dev Channel of the Windows Insider Program, Microsoft’s community for professionals and Windows enthusiasts to try out new features and versions of Windows and provide feedback.

The testing will be constrained to a very limited number of users for now, because to use this USB speed standard, your PC will need one of Intel’s most cutting-edge processors: an Intel Core 14th Gen HX-series mobile chip.

This lineup of processors was only just announced at CES 2024 on January 8, so very few users will have access to them at present.

As Microsoft details in its Windows Insider Blog, this is the first substantial update to the USB4 standard, doubling USB transfer speeds from 40Gbps to 80Gbps. Here’s what Microsoft had to say, expanding on what this development will mean for future devices: 

This is the first major version update of the USB4 standard and increases performance to 80Gbps from 40Gbps. It enables the next generation of high-performance displays, storage, and connectivity. It is fully backwards compatible with peripherals built for older generations of USB and Thunderbolt and works alongside all other USB Type-C features.
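
Some rough arithmetic puts that doubling in perspective (our own illustration – real-world rates will be lower due to protocol overhead and drive speeds):

```python
# The jump from 40Gbps (USB4) to 80Gbps (USB4 Version 2.0), in practical terms.
file_gb = 100  # say, a 100GB video project

for name, gbps in [("USB4 (40Gbps)", 40), ("USB4 v2 (80Gbps)", 80)]:
    gb_per_s = gbps / 8               # 8 bits per byte
    seconds = file_gb / gb_per_s
    print(f"{name}: {gb_per_s:.0f} GB/s -> {seconds:.0f}s for {file_gb}GB")

# USB4 (40Gbps): 5 GB/s -> 20s for 100GB
# USB4 v2 (80Gbps): 10 GB/s -> 10s for 100GB
```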

Microsoft Teams copilot

(Image credit: Microsoft Teams)

What else is Microsoft testing right now?

The Windows 11 Preview Build 23615 offers testers a crop of new features, including USB4 Version 2.0 support. One other introduction Microsoft is looking into is launching Copilot automatically when Windows 11 starts up, specifically on widescreen devices (with no specifics on exactly what qualifies as wide enough). Windows observers don't seem so hot on this prospect, and it seems Microsoft knew this was likely, as it has provided instructions on how to disable it: Settings > Personalization > Copilot.

In this build, Microsoft also added apps that you can share URLs directly to via the Windows share window, namely WhatsApp, Gmail, X (formerly Twitter), Facebook, and LinkedIn. If you’d like to try this in Microsoft Edge, for instance, you first have to enable the Share button, as it’s disabled by default. You can do this by going to the three-dot icon in the top right of Edge, opening Settings, scrolling down to “Select which buttons to show on the toolbar,” and toggling on the Share button.

Then, you’ll need to highlight or copy the link you want to share, and then click the Share button to the right of the address bar (which will be grayed out at first but then darken when you’ve selected a link).

While the above build is still being rolled out to Windows Insiders in the Dev Channel, these new features will also be made available through a gradual rollout in Beta build 22635.3061 via the Beta Channel of the Windows Insider Program. Users who install this build will need to turn on a toggle to enable the new features if they’d like to try them. Thurrott.com has detailed these and more features and preview builds that have just been released for Windows Insiders to try out now.

It’ll be a little while before we start seeing the effects of the USB4 Version 2.0 standard and you’ll have to get one of the newest Windows PCs available to see it for yourself. It sounds very promising and will likely improve users’ experiences when USB4 Version 2.0 devices and accessories start to roll out.


I’d love it if Apple dropped the Vision Pro’s worst feature to make a cheaper version – it’s a win-win

Well, the Apple Vision Pro isn’t even available yet, and Apple is already looking to the future (well, the further future) to consider how to make the follow-up model even better. Or, in this case, even cheaper: according to a new report from Bloomberg’s Mark Gurman, Apple wants its second headset to cost less.

We already knew Apple was likely considering a cheaper Vision Pro model, and this latest news gives credence to those rumors. It seems like the first-generation Vision Pro is having some teething issues too, most notably the weight – the same Gurman report discusses Apple’s efforts to reduce the weight of a second-generation headset. But there’s a far better nugget of information we should be focusing on…

Apple might get rid of that horrible external display! Yes, the EyeSight feature that I heavily criticized when the headset was first revealed might be cut from future Vision models in order to save on costs. It makes a lot of sense; adding an external screen to show the user’s eyes is a ridiculous feature that no doubt costs a lot of money, adding extra (entirely unneeded) OLED display tech to every single headset. The screen even has lenticular glass to create an illusion of depth.

The feature isn’t even that helpful; it projects the eyes of your uncanny-valley FaceTime avatar onto the exterior screen when you’re looking through the headset’s external cameras, or displays an opaque color when you’re immersed in something that blocks out your surroundings. In other words, it tells other people if a Vision Pro user can see them or not – something that could’ve probably been achieved with a single LED.

Get ready for the Apple Vision… something

Apple is reportedly targeting a $1,500 to $2,000 price range for its more affordable Vision headset, which sounds a lot more accessible than the current $3,499 price tag. Given that some other popular VR headsets – like the new Meta Quest 3 – are a lot cheaper, many potential users are likely to give the first-gen Vision Pro a pass.

A cheaper model was likely always going to happen given the name Apple chose for its headset – Vision Pro implies the (future) existence of a standard Apple Vision headset, in keeping with Apple’s naming conventions for its other products. Just look at the iPhone 15 Pro and the MacBook Pro – hey, maybe the cheaper headset will be called the Apple Vision Air?

Apple is also reportedly developing a second high-end version of the headset (a Vision Pro Max, perhaps?), and it’s enough to make me wonder if the first-gen model will even be worth buying at all. The Vision Pro represents Apple’s first step into an entirely new market, and it wouldn’t even surprise me if it gets hit with delays – after all, it’s suspected that issues with chip manufacturer TSMC could delay M3 MacBook Air models until the middle of next year, and the Vision Pro features not just the M2 chip but also a new dedicated R1 chip for mixed-reality workloads.

In any case, I’ll be happy to see a less expensive headset from Apple. I still remember the noise the crowd made at WWDC 2023 when the price was unveiled… and let’s face it, EyeSight is a feature nobody really needs.


Adobe’s free version of Firefly finally exits beta – here’s how to access it

Adobe has announced it is expanding the general availability of its Firefly generative AI tool on the company’s free Express platform. 

More specifically, the Text to Image and Text Effects tools are finally exiting their months-long beta. The former, as the name suggests, allows users to create unique images just by entering a text prompt, such as “horses galloping” or “a monster in a forest”. The latter lets people create floating text bubbles with fonts sporting special effects. The two are mainly used to create compelling content for a variety of use cases, from enhancing plain-looking resumes to creating marketing material. Apparently, the tools were a huge hit with users during the beta.

Firefly’s text features are available in over 100 different languages, including Spanish, French, Japanese, and, of course, English. What’s interesting is that Adobe tells us the AI is “safe for commercial use.” Presumably, this means the model won’t generate anything inappropriate or totally random; what it does generate will fit the prompt you entered.

How to use Firefly

Using the generative AIs is very easy to do; it honestly takes no time at all. First, head over to the Adobe Express website and create an account if you haven’t done so already. Scroll down a little on the front page, and you’ll see the creation tools primed and ready to go.

Adobe Express website

(Image credit: Future)

Enter whatever text prompt you have in mind, give Adobe Express a few seconds to generate the content, and you’re set. You can then edit the image further if you’d like via the kit on the left-hand side.

Adobe Firefly

(Image credit: Future)

Future updates

The rest of the Firefly update is mainly geared towards an entrepreneurial audience. Subscribers to either Adobe Creative Cloud or Express Premium will begin to receive Generative Credits that can be used to have Firefly create content. Additionally, the AI is being integrated into an Adobe asset library for businesses. There aren’t any new features for everyday, casual users – at least not right now. 

Adobe states it has plans to expand its Express platform in the coming months. Most notably, it wants to bring the “latest version” to mobile devices, so we might see the Firefly AI on smartphones by the end of the year. We reached out to Adobe for clarification and will update this story in due course.

While we have you, be sure to check out TechRadar’s list of the best AI art generators for 2023. Any one of these is a good alternative to Firefly.


Microsoft reminds Windows 11 users on original version that they’ll soon be forced to upgrade

Are you still running Windows 11 21H2? The original version of Windows 11 is about to run out of road for support, and Microsoft has reminded us that users are going to have to upgrade to a newer version imminently.

Neowin spotted that Microsoft has updated its release health dashboard to make things clear for those on Windows 11 21H2 (Home and Pro, plus Pro variants).

The company reminds us that support ends on October 10, 2023, and that the cumulative security update for October, to be released on that day, will be the last ever update that Windows 11 21H2 receives.

Microsoft further clarifies: “The September 2023 non-security preview update will be the last optional release and the October 2023 security update will be the last security release for Windows 11, version 21H2. Windows 11, version 22H2 will continue to receive security and optional releases.”


Analysis: Only one road ahead

Users on 21H2 will therefore be pushed to upgrade to 22H2, and Windows 11 will automatically fire up the update when this end date rolls around – or up to a couple of months before that. So, if you are still on Windows 11 21H2, you might experience this forced upgrade very soon.
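
Not sure which feature update your PC is on? Besides running winver, you can read the DisplayVersion value from the registry – here's a short illustrative sketch using Python's standard-library winreg module (Windows only):

```python
import winreg  # standard library; Windows only

# Read the same value the winver dialog shows, e.g. "21H2" or "22H2".
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion",
)
version, _ = winreg.QueryValueEx(key, "DisplayVersion")
winreg.CloseKey(key)

print(f"Windows feature update: {version}")
if version == "21H2":
    print("Support ends October 10, 2023 - expect the push to 22H2.")
```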

It is, of course, of paramount importance that your copy of Windows 11 remains in date and keeps up with the flow of security fixes, otherwise your PC could be open to being exploited by hackers and opportunists out there.

If Windows 11 23H2 emerges very soon, it’s possible you could get pushed to move to that instead of 22H2. However, we don’t think that’s too likely – it could arrive later this month, as we’ve previously observed, but most rumors have it penciled in for Q4, which of course means October at the soonest, and quite possibly not early in the month. We shall see.
