Frustrations are being aired about Windows 11’s new Copilot app – but here’s why we’re not worried (just yet)

Microsoft is seemingly going backwards with Copilot in Windows 11, and things certainly don’t look great in testing for the AI assistant right now.

Windows Latest spearheads a complaint – echoed elsewhere by other denizens of various forums and social media outlets – that the latest incarnation of Copilot sees Microsoft ‘downgrading’ the AI to a “Microsoft Edge-based web wrapper” (we’ll come back to that point shortly).

To take a step back for a moment, this is all part of Microsoft’s recent move – announced in May 2024, and implemented in June – to switch Copilot from being an assistant anchored to a side panel (on the right) to a full app experience (a window you can move around the desktop, resize and so on, like a normal app basically).

As Windows Latest points out, in the latest update for Windows 11 (in testing), changes that are rolling out to some users turn Copilot into a basic web app – although in fact, Copilot has always been a web app (even when in its previous incarnation as a locked side panel, before the standalone app idea came about).

What the tech site is really complaining about is how basic and transparent Copilot’s nature really is in this freshly deployed take. This means the Copilot window shows Edge menus and options, and just opens copilot.microsoft.com in an Edge tab – and you can even open any old website in the Copilot app with a bit of fudging and a few clicks here and there. And all that feels rather disappointing and basic, of course.


Acer Swift laptop showing the Copilot key

(Image credit: Future / James Holland)

Analysis: Strip it back, then build it up

We get the criticism here, although as noted, all that’s really happening is that Copilot is being more obviously exposed for what it is – a simple web app that basically just pipes you through to the same AI chatbot experience you get with the Copilot website.

However, there is a twist here – namely that the extra options Copilot offered for manipulating Windows 11 settings in some respects (in the pre-standalone app days) have reportedly been ditched. Not that these abilities were any great shakes to begin with – they’ve always been fairly limited – but still, it does feel like a step back to see them vanish.

Ultimately, this leaves the new Copilot experience in Windows 11 feeling very disjointed and not at all well integrated into the OS – just slapped on top, really. However, we do have to remember that this is still in testing.

Stripping features back in preview can be expected – even if it isn’t a pretty sight right now, presumably Microsoft is going to build it back up, make the new Copilot app more seamless, and reintroduce those powers related to Windows settings. In fact, we’d be shocked if that didn’t happen…

Unless, that is, Microsoft does have plans to make Copilot a more basic entity in Windows 11 – but that seems very unlikely, unless perhaps many more future AI powers are going to be forked off exclusively for Copilot+ PCs (like Recall – which is another controversial topic in itself).

Time will tell, but eventually we expect Copilot to become a more well-rounded and seamless app – and crucially, when powerful NPUs become more widespread, the AI assistant will be able to run far more AI workloads on-device (rather than hooking up to the cloud to get the necessary processing power). That’s when a more fully fledged app with greater powers to operate locally will likely become a reality.

In its current format, though, which has always been pretty basic, Copilot in Windows 11 doesn’t really need to be any more than a simple web wrapper.

You might also like…

TechRadar – All the latest technology news

Read More

Watch the AI-produced film Toys”R”Us made using OpenAI’s Sora – and get misty about the AI return of Geoffrey the Giraffe

Toys”R”Us premiered a film made with OpenAI's text-to-video artificial intelligence tool, Sora, at this year's Cannes Lions Festival. “The Origin of Toys”R”Us” was produced by the company's entertainment production division, Toys”R”Us Studios, together with creative agency Native Foreign, which scored alpha access to Sora – OpenAI hasn't released the tool to the public yet. That makes Toys”R”Us one of the first brands to leverage the video AI tool in a major way.

“The Origin of Toys”R”Us” explores the early years of founder Charles Lazarus in a rather more whimsical way than retail giants are usually portrayed. Company mascot Geoffrey the Giraffe appears to Lazarus in a dream to inspire his business ambitions in a way that suggests huge profits were an unrelated side effect (at least until relatively recently) for Toys”R”Us.

“Charles Lazarus was a visionary ahead of his time and we wanted to honor his legacy with a spot using the most cutting-edge technology available,” four-time Emmy Award-winning producer and President of Toys”R”Us Studios Kim Miller Olko said in a statement. “Partnering with Native Foreign to push the boundaries of OpenAI's Sora is truly exciting. Dreams are full of magic and endless possibilities, and so is Toys”R”Us.”

Sora Stories and the uncanny valley

Sora can generate up to one-minute-long videos based on text prompts with realistic people and settings. OpenAI pitches Sora as a way for production teams to bring their visions to life in a fraction of the usual time. The results can be breathtaking and bizarre.

For “The Origin of Toys”R”Us,” the filmmakers condensed hundreds of iterative shots into a few dozen, completing the film in weeks rather than months. That said, the producers did use some corrective visual effects, and added original music composed by Aaron Marsh of indie rock band Copeland.

The film is brief and its AI origins are only really obvious when it is paused. Otherwise, you might think it was simply the victim of an overly enthusiastic editor with access to some powerful visual effects software and actors who don't know how to perform in front of a green screen.

Overall, it manages to mostly avoid the uncanny valley – except when the young founder smiles, which is a little too much like watching “The Polar Express.” Still, considering it was produced with the alpha version of Sora, and with relatively limited time and resources, you can see why some are very excited about the tool.

“Through Sora, we were able to tell this incredible story with remarkable speed and efficiency,” Native Foreign Chief Creative Officer and the film's director Nik Kleverov said in a statement. “Toys”R”Us is the perfect brand to embrace this AI-forward strategy, and we are thrilled to collaborate with their creative team to help lead the next wave of innovative storytelling.”

The debut of “The Origin of Toys”R”Us” at the Cannes Lions Festival underscores the growing importance of AI tools in advertising and branding. The film acts as a new proof of concept for Sora, and it may portend a lot more generative AI-assisted movies in the future. That said, there's a lot of skepticism and resistance in the entertainment world. Writers and actors went on strike for a long time in part because of generative AI, and the new contracts included rules for how companies can use AI models. The world premiere of a movie written by ChatGPT had to be outright canceled over complaints about that aspect, and if Toys”R”Us tried to make its film available in theaters, it would probably face the same backlash.


YouTube’s Stable Volume is now on Android TV devices – here’s everything you need to know about the update your ears may love

Weird audio mixing is a really annoying problem. How many times have you watched a video or movie where the audio sounds fine, only for the dialogue to be super quiet?

Google is helping audiences out by expanding YouTube’s Stable Volume feature from the mobile app to “Android TV and Google TV devices.” It's a handy tool that automatically adjusts “the volume of videos you watch,” all without requiring you to pick up your remote, according to 9To5Google.

That story explains that Stable Volume ensures a consistent listening experience “by continuously balancing the volume range between quiet and loud parts” in a video. 9To5Google discovered the feature after installing YouTube version 4.40.303 on an Android TV display.
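Under the hood, this sort of feature is essentially dynamic loudness normalization. Google hasn't published how Stable Volume actually works, so the following is just a rough illustrative sketch (toy numbers, raw RMS instead of perceptual loudness – not YouTube's algorithm) of a gain controller that nudges each chunk of audio toward a target level:

```python
import math

def stable_volume(samples, chunk=1024, target_rms=0.1, smooth=0.9):
    """Toy loudness leveler: nudge each chunk's gain toward a target RMS.

    Illustrative only – YouTube's real Stable Volume algorithm isn't public.
    """
    out = []
    gain = 1.0
    for i in range(0, len(samples), chunk):
        block = samples[i:i + chunk]
        rms = math.sqrt(sum(s * s for s in block) / len(block))
        if rms > 0:
            desired = target_rms / rms                     # gain that would hit the target
            gain = smooth * gain + (1 - smooth) * desired  # smooth it to avoid audible pumping
        out.extend(s * gain for s in block)
    return out

# A too-quiet passage gets progressively boosted toward the target level:
quiet = [0.01] * 4096
levelled = stable_volume(quiet)
```

The smoothing term is what stops the gain from audibly "pumping" between quiet and loud passages; real implementations measure perceptual loudness (e.g. LUFS) rather than raw RMS.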

If you select the gear icon whenever a video is playing, you should see Stable Volume as an option within the Settings menu. It’ll sit in between Captions and the playback speed function.

Stable Volume on Android TV

(Image credit: Google/9To5Google)

It’s turned on by default, but you can deactivate it at any time just by selecting it while watching content. 9To5Google recommends turning off Stable Volume while listening to music or playing a video with a “detailed audio mix.” Having it activated then could potentially mess with the sound quality. Plus, YouTube Music isn't on Android TV or Google TV hardware, so you won't have a dedicated space specifically for songs.

We should mention that the official YouTube Help page for Stable Volume states that it isn’t available for all videos, and that music won't be negatively affected. We believe this note is outdated because it also says the tool is exclusive to the YouTube mobile app. It’s entirely possible the versions on Android TV and Google TV could behave differently.

Be sure to keep an eye out for the patch when it arrives. It joins other YouTube on TV features launched in 2024 such as Multiview and the auto-generated key moments.

Check out TechRadar's list of the best TVs for 2024. We cover a wide array of models for different budgets.


Excited about Apple Intelligence? The firm’s exec Craig Federighi certainly is, and has explained why it’ll be a cutting-edge AI for security and privacy

Reactions to Apple Intelligence, which Apple unveiled at WWDC 2024, have ranged from curious to positive to underwhelmed, but whatever your views on the technology itself, a big talking point has been Apple’s emphasis on privacy, in contrast to some companies that have been offering generative AI products for some time. 

Apple is putting privacy front and center with its AI offering, and has been keen to talk about how Apple Intelligence – which will be integrated across iOS 18, iPadOS 18, and macOS Sequoia – will differ from its competitors by taking a fresh approach to handling user information.

Craig Federighi, Apple’s Senior Vice President of Software Engineering, and the main presenter of the WWDC keynote, has been sharing more details about Apple Intelligence, and the company’s privacy-first approach.

Speaking to Fast Company, Federighi explained more about Apple’s overall AI ambitions, confirming that Apple is in agreement with other big tech companies that generative AI is the next big thing – as big a thing as the internet or microprocessors were when they first came about – and that we’re at the beginning of generative AI’s evolution. 

WWDC

(Image credit: Apple)

Apple's commitment to AI privacy

Federighi told Fast Company that Apple is aiming to “establish an entirely different bar” to other AI services and products when it comes to privacy. He reinforced the messaging in the WWDC keynote that the personal aspect of Apple Intelligence is foundational to it and that users’ information will be under their control. He also reiterated that Apple wouldn’t be able to access your information, even while its data centers are processing it. 

The practical measures that Apple is taking to achieve this begin with its lineup of Apple M-series processors, which it claims will be able to run and process many AI tasks on-device, meaning your data won’t have to leave your system. For times when that local processing power is insufficient, the task at hand will be sent to dedicated custom-built Apple servers utilizing Private Cloud Compute (PCC), offering far more grunt for requests that need it – while being more secure than other cloud products in the same vein, Apple claims.

This will mean that your device will only send the minimum information required to process your requests, and Apple claims that its servers are designed in such a way that it’s impossible for them to store your data. This is apparently because after your request is processed and returned to your device, the information is ‘cryptographically destroyed’, and is never seen by anyone at Apple. 
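Apple hasn't detailed PCC's internals beyond its security post, but 'cryptographic destruction' generally means encrypting data under a per-request ephemeral key and then discarding the key, leaving any stored bytes unrecoverable. Here's a toy one-time-pad sketch of that general idea (our illustration, not Apple's actual scheme):

```python
import secrets

def process_request(plaintext: bytes):
    """Toy model of 'process, then cryptographically destroy'.

    A fresh random key encrypts the request (one-time-pad style); once the
    key is discarded, the ciphertext alone reveals nothing. This is our
    illustration of the general technique, not Apple's actual PCC design.
    """
    key = secrets.token_bytes(len(plaintext))               # per-request ephemeral key
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    response = plaintext.upper()                            # stand-in for the AI doing work
    del key                                                 # 'destroying' the key orphans the ciphertext
    return response, ciphertext

resp, leftover = process_request(b"find my photos from paris")
```

The point of the sketch: the response goes back to the device, while whatever remains on the server is useless without the discarded key.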

Apple has published a more in-depth security research blog post going into more detail about PCC, which, as noted at WWDC 2024, is a system available to independent security researchers, who can access Apple Intelligence servers in order to verify Apple’s privacy and security claims around PCC.

Apple wants AI to feel like a natural, almost unnoticeable part of its software, and the tech giant is clearly keen to win the trust of those who use its products and to differentiate its take on AI compared with that of rivals. 

WWDC presentation

(Image credit: Apple)

More about ChatGPT and Apple Intelligence in China

Federighi also talked about Apple’s new partnership with OpenAI and the integration of ChatGPT into its operating systems. This is being done to give users access to industry-leading advanced models, while reassuring them that ChatGPT isn’t what powers Apple Intelligence – the latter is exclusively driven by Apple’s own large language models (LLMs), and the two are kept totally distinct on Apple’s platforms. You will, though, be able to enlist ChatGPT for more complex requests.

ChatGPT is only ever invoked at the user’s request and with their permission, and before any requests are sent to ChatGPT you’ll have to confirm that you want to do this explicitly. Apple teamed up with OpenAI to give users this option because, according to Federighi, GPT-4o is “currently the best LLM out there for broad world knowledge.” 

Apple is also considering expanding this concept to include other LLM makers in the future so that you might be able to choose from a variety of LLMs for your more demanding requests. 

Federighi also talked about Apple's plans for Apple Intelligence in China – the company’s second-biggest market – and how Apple is working to comply with regulations in the country to bring its most cutting-edge capabilities to all customers. This process is underway, but may take a while, as Federighi observed: “We don’t have timing to announce right now, but it’s certainly something we want to do.”

We’ll have to see how Apple Intelligence performs in practice, and if Apple’s privacy-first approach pays off. Apple has a strong track record when it comes to designing products and services that integrate so seamlessly that they become a part of our everyday lives, and it might very well be on track to continue building that reputation with Apple Intelligence.


Windows 11 users: get ready for lock screen widgets that might annoy you (but Microsoft is doing something about that)

Windows 11 and 10 users, you can breathe a sigh of relief for a moment: here’s some news that isn’t about sticking more AI into the heart of Windows 11, or about Windows 10’s seemingly unavoidable end – although I don’t know if this development will be a cause for joy either. Microsoft is fully rolling out MSN lock screen widgets after testing the feature for the past four months.

Apparently, the feature is still in the process of being rolled out, so you may not see it quite yet, but these widgets should appear on your lock screen very soon (if they don’t already). Microsoft is implementing this change for Windows 11 and 10 via a server-side update, so the widgets will just suddenly appear – and so far, Windows Latest observes that users aren’t receiving them warmly.

Part of the problem is that the lock screen widgets displayed are pre-set by Microsoft, and they can’t be adjusted or modified to your preferences. The widgets appear if you switch them on, or if you already have the ‘Weather or more’ option turned on, in the Settings app.

To be precise, you’ll find this option in the following location: 

Settings > Personalization > Lock Screen  

A selection of a screenshots of the Lock Screen section in the Settings app, allowing users to switch on the batch of widgets

(Image credit: Microsoft)

An all or nothing proposition – at least for now

The pre-configured MSN widgets include Microsoft Money, Sports, and Weather, but you can’t currently pick and choose which of these you’d like to keep and which to leave out. I imagine this is where a lot of the dissatisfaction with the feature comes from: if you’d like some widgets on your lock screen but not others – well, tough luck. You’re forced to have them all, or none of them (if you switch the feature off).

Why can’t you adjust these widgets individually, turning off the ones you don’t like, as you can with other individual widgets such as Mail or Calendar? Well, the good news is that you’ll be able to do that before long, as Microsoft has promised this ability is inbound for Windows 11 and 10 users.

We don’t know when this important change is set to arrive, but hopefully it’ll come sooner rather than later, as we can’t imagine it’s a huge task for Microsoft.


Google Maps is about to get a big privacy boost, but fans of Timeline may lose their data

One of Google Maps’ most popular features, Timeline, is about to become a lot more secure. To give you a quick refresher, Timeline acts as a diary of sorts, keeping track of all the past routes and trips you’ve taken. It’s a fun way to relive memories.

Utilizing this tool currently requires people to upload their data to company servers for storage. That will change later this year, though: according to a recent email obtained by Android Police, Google will soon be keeping Timeline data on your smartphone.

Migrating Maps data over to localized device storage would greatly improve security as you won’t be forced to upload sensitive information to public servers anymore. However, due to the upcoming change, Google has decided to kill off Timeline for Web. Users have until December 1, 2024, to move everything from the online resource to their phone’s storage drive. Failure to take any action could result in losing valuable data, like moments from your timeline. 

“Google will try moving up to 90 days of Timeline data to the first signed-in device” after the cutoff date. However, anything older than 90 days will be deleted – and it's important to note the wording. Google will “try” to save as much as it can, meaning there is no guarantee it will successfully migrate everything if you miss the deadline. It’s unknown why this is the case, although we did ask.

Configuring Timeline

The company is asking people to review their Google Maps settings and choose which device will house their “saved visits and routes.” The email offers a link to the app’s settings menu, but if you didn’t get the message, you can navigate to Google Maps on your mobile device to make the changes there. That's how we did it.

First, update Google Maps if you haven’t done so already, and then go to the Timeline section, where you’ll be greeted with a notification informing you of forthcoming changes. 

Then, click the Next button, and a new window will appear asking you how long you would like to keep your data. You can select to store the information until you get rid of it or set up an auto-delete function. Users can have Google Maps trash their Timeline after three, 18, or 36 months have passed.
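Conceptually, the auto-delete option boils down to a simple retention check against the window you pick. As a hypothetical sketch (the day counts are our approximations, and the data shape is invented purely for illustration):

```python
from datetime import datetime, timedelta

# Approximate day counts for the three auto-delete windows (our estimates)
RETENTION_CHOICES = {"3 months": 90, "18 months": 548, "36 months": 1095}

def prune_timeline(entries, choice, now=None):
    """Keep only timeline entries newer than the selected auto-delete window.

    entries: list of (place_name, visited_at) tuples – an invented shape,
    not Google's actual data model.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=RETENTION_CHOICES[choice])
    return [(name, ts) for name, ts in entries if ts >= cutoff]

trips = [
    ("Paris", datetime(2024, 6, 1)),
    ("Tokyo", datetime(2022, 1, 15)),
]
# With an 18-month window, the 2022 trip falls outside the cutoff and is dropped
kept = prune_timeline(trips, "18 months", now=datetime(2024, 6, 20))
```

Picking "store until you get rid of it" simply skips this pruning step entirely.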

Google Maps' new Timeline menu

(Image credit: Future)

Additionally, you can choose to back your Timeline data up to Google's servers. Android Police explains that this revamped system curates Maps Timelines for each device “independently.” So, if you buy a new smartphone and want to restore your data, using the backup tool is the best way to do it.

What’s interesting is that the Timeline transfer is a one-way street. Google states on a Maps Help page that after the data is moved to your smartphone, you cannot revert to the previous method. We experienced this firsthand: after localizing storage, we couldn’t find any way to upload data to company servers outside of the backup function.

Don’t worry if you haven’t received the email or the Google Maps patch as of yet. Android Police says the company is slowly rolling out the changes, so be sure to keep an eye out for either one.

While we have you, check out TechRadar's list of the best Android phones for 2024.


Upgrade to Windows 11 or take the risk: Microsoft warns about Windows 10’s end-of-life date once again

Windows 10 might hold a share of around 70% of the overall Windows user base, but that’s not making Microsoft flinch when it comes to its plans to deprecate the fan-favorite operating system. The date when Windows 10 is going to stop receiving support and new updates has been set (it’s October 14, 2025), and current Windows 10 users are being reminded again. 

This isn’t the first time Microsoft has prodded users to upgrade to Windows 11 – far from it. Previously the company has shown full-screen multi-page reminders, and now, Microsoft has added an official web page detailing the inevitable.

The new ‘End of support’ page offers Microsoft’s advice and recommendations for transitioning to Windows 11 if you’re running Windows 10 (or an even older version, like Windows 8.1 or Windows 7).

Windows 8.1 and Windows 7 have already been ditched and haven’t been receiving updates for a long time, and Windows 10 will join them next year. The official page goes into detail about what will happen when support ends and what users can expect. 

The Windows 10-specific page has a prominent banner urging users to upgrade to Windows 11 for free if their PC is eligible. Microsoft also explains that Windows 10 users will no longer receive security or technical updates after October 2025. Their PCs will continue to work, but they won't get security updates and will be left open to potential security exploits, and so Microsoft recommends that they move on to Windows 11 (if their hardware allows the upgrade).

The dedicated transition page also has other linked pages detailing Windows 11’s features and how they’re an apparent improvement on Windows 10, as well as a straightforward comparison page between the two operating systems. There’s a page that even takes you through the process of how to shop for a new laptop, should you wish to upgrade to Windows 11 on a new device, and how you can back up your data on OneDrive to make sure you don’t lose it when you transition to a new machine. 

Microsoft is pretty insistent that you will need to get a device capable of running Windows 11, preferably a new one – and, if you really want to make Microsoft happy, you can go for one of its brand new next-gen Copilot+ PCs.

Microsoft presenting Surface Laptop and Surface Pro devices.

(Image credit: Microsoft)

So, what's next for Windows 10 users?

Windows 10 users who don’t want to migrate to Windows 11 will be faced with a difficult choice – switch to an alternative OS entirely (like Linux), or stick with Windows 10 and open up their PC to possible malware and security holes that don’t get resolved by updates after October 2025. These users will also not see any new features for their system or apps introduced through updates.

The other choice is to continue receiving critical security updates for Windows 10 by opting in to the Extended Security Updates (ESU) program for the operating system. This isn't intended to be a permanent fix – its purpose is to offer a temporary solution, mainly for organizations and businesses while they transition to a newer operating system.

The pricing plans for individual users opting for the ESU program haven’t been revealed yet, but Windows Latest has learned that Microsoft will share this information later in the year. Businesses will pay $61 per device for the first year (and that price will increase every year).

Many people just prefer Windows 10 to Windows 11, but there are also folks whose devices don’t meet the hardware requirements to run Microsoft’s newest OS. While there are workarounds for some PCs to fudge an installation of Windows 11, we wouldn’t necessarily recommend that course of action (and neither is it suitable for the less tech-savvy out there).

Microsoft might be eager for people to move on to its shiny new AI-driven Copilot+ PCs, but many people can’t afford a new computer right now, and, for the time being, Windows 10 works perfectly well. A lot of people aren’t that keen on Windows 11 either, due to some of its performance issues, perceived flaws in the operating system’s design, and Microsoft’s persistent effort to integrate AI features into multiple parts of the OS. 

I don’t know if Microsoft will be successful in converting more users to Windows 11 and its new line-up of PCs, but Windows 10 fans are reluctant to move on just yet. As to whether that will change next year, we’ll just have to see, but Windows 11 adoption appears to have stalled recently, so it’s not looking great for Microsoft. That said, Windows is still the most widely used desktop operating system in the world, and there’s no threat to its dominance that will mean Microsoft feels the heat in any meaningful way – for now.


Six major ChatGPT updates OpenAI unveiled at its Spring Update – and why we can’t stop talking about them

OpenAI just held its eagerly anticipated Spring Update event, making a series of exciting announcements and demonstrating the eye- and ear-popping capabilities of its newest GPT AI models. There were changes to model availability for all users, and at the center of the hype and attention: GPT-4o.

Coming just 24 hours before Google I/O, the launch puts Google's Gemini in a new perspective. If GPT-4o is as impressive as it looked, Google and its anticipated Gemini update better be mind-blowing. 

What's all the fuss about? Let's dig into all the details of what OpenAI announced. 

1. The announcement and demonstration of GPT-4o, and that it will be available to all users for free

OpenAI demoing GPT-4o on an iPhone during the Spring Update event.

OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

The biggest announcement of the stream was the unveiling of GPT-4o (the 'o' standing for 'omni'), which combines audio, visual, and text processing in real time. Eventually, this version of OpenAI's GPT technology will be made available to all users for free, with usage limits.

For now, though, it's being rolled out to ChatGPT Plus users, who will get up to five times the messaging limits of free users. Team and Enterprise users will also get higher limits and access to it sooner. 

GPT-4o will have GPT-4's intelligence, but it'll be faster and more responsive in daily use. Plus, you'll be able to provide it with or ask it to generate any combination of text, image, and audio.

The stream saw Mira Murati, Chief Technology Officer at OpenAI, and two researchers, Mark Chen and Barret Zoph, demonstrate GPT-4o's real-time responsiveness in conversation while using its voice functionality. 

The demo began with a conversation about Chen's mental state, with GPT-4o listening and responding to his breathing. It then told a bedtime story to Barret with increasing levels of dramatics in its voice upon request – it was even asked to talk like a robot.

It continued with a demonstration of Barret “showing” GPT-4o a mathematical problem, with the model guiding him through solving it by providing hints and encouragement. Chen then asked why this specific mathematical concept was useful, which the model answered at length.

A look at the updated mobile app interface for ChatGPT.

A look at the updated mobile app interface for ChatGPT. (Image credit: OpenAI)

They followed this up by showing GPT-4o some code, which it explained in plain English, and it provided feedback on the plot that the code generated. The model talked about notable events, the axis labels, and the range of inputs. This was to show OpenAI's continued commitment to improving GPT models' handling of codebases, and the improvement of their mathematical abilities.

The penultimate demonstration was an impressive display of GPT-4o's linguistic abilities, as it simultaneously translated two languages – English and Italian – out loud. 

Lastly, OpenAI provided a brief demo of GPT-4o's ability to identify emotions from a selfie sent by Barret, noting that he looked happy and cheerful.

If the AI model works as demonstrated, you'll be able to speak to it more naturally than many existing generative AI voice models and other digital assistants. You'll be able to interrupt it instead of having a turn-based conversation, and it'll continue to process and respond – similar to how we speak to each other naturally. Also, the lag between query and response, previously about two to three seconds, has been dramatically reduced. 

ChatGPT equipped with GPT-4o will roll out over the coming weeks, free to try. This comes a few weeks after OpenAI made ChatGPT available to try without signing up for an account.

2. Free users will have access to the GPT store, the memory function, the browse function, and advanced data analysis

OpenAI unveils the GPT Store

OpenAI unveils the GPT Store at its Spring Update event. (Image credit: OpenAI)

GPTs are custom chatbots created by OpenAI and ChatGPT Plus users to help enable more specific conversations and tasks. Now, many more users can access them in the GPT Store.

Additionally, free users will be able to use ChatGPT's memory functionality, which makes it a more useful and helpful tool by giving it a sense of continuity. Also being added to the no-cost plan are ChatGPT's vision capabilities, which let you converse with the bot about uploaded items like images and documents, and the browse function, which lets ChatGPT draw on up-to-date information from the web.

ChatGPT's abilities have improved in quality and speed in 50 languages, supporting OpenAI’s aim to bring its powers to as many people as possible. 

3. GPT-4o will be available in the API for developers

OpenAI GPT-4o API

(Image credit: OpenAI)

OpenAI's latest model will be available for developers to incorporate into their AI apps as a text and vision model. Support for GPT-4o's video and audio abilities will launch soon, offered to a small group of trusted partners via the API.
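For developers, GPT-4o is addressed through the same chat-style request shape as earlier GPT-4 models – only the model name changes. As a minimal sketch of the request body (shape only; actually sending it requires OpenAI's official client or an authenticated HTTPS POST, which we omit here):

```python
import json

def build_chat_request(prompt, model="gpt-4o"):
    """Assemble the JSON body for a chat-completions style request.

    The shape mirrors OpenAI's chat API; transport and authentication
    (API key, endpoint URL) are deliberately left out of this sketch.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Summarize this article in one sentence.")
payload = json.dumps(body)  # what would go over the wire
```

Swapping in an older model name is a one-argument change, which is what makes the upgrade path for existing apps so straightforward.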

4. The new ChatGPT desktop app 

A look at the new ChatGPT desktop app running on a Mac.

A look at the new ChatGPT desktop app running on a Mac. (Image credit: OpenAI)

OpenAI is releasing a desktop app for macOS to advance its mission to make its products as easy and frictionless as possible, wherever you are and whichever model you're using, including the new GPT-4o. You’ll also be able to use keyboard shortcuts to carry out tasks even more quickly.

According to OpenAI, the desktop app is available to ChatGPT Plus users now and will be available to more users in the coming weeks. It sports a similar design to the updated interface in the mobile app as well.

5. A refreshed ChatGPT user interface

ChatGPT is getting a more natural and intuitive user interface, refreshed to make interacting with the model easier and less jarring. OpenAI wants to get to the point where people barely focus on the AI at all, and for ChatGPT to feel friendlier – which means a new home screen, a new message layout, and other changes.

6. OpenAI's not done yet

OpenAI

(Image credit: OpenAI)

The mission is bold: OpenAI wants to demystify AI even as it builds some of the most complex technology most people can access. Murati wrapped up by promising an update soon on what OpenAI is preparing to show us next, and by thanking Nvidia for providing the advanced GPUs that made the demonstration possible.

OpenAI is determined to shape our interaction with devices, closely studying how humans interact with each other and trying to apply its learnings to its products. The latency of processing all of the different nuances of interaction is part of what dictates how we behave with products like ChatGPT, and OpenAI has been working hard to reduce this. As Murati puts it, its capabilities will continue to evolve, and it’ll get even better at helping you with exactly what you’re doing or asking about at exactly the right moment. 

You might also like

TechRadar – All the latest technology news

Read More

Windows 11’s Copilot AI just took its first step towards being an indispensable assistant for Android – but Google Gemini hasn’t got anything to worry about yet

Microsoft’s Copilot AI could soon help Windows 11 users deal with texting on their Android smartphone (and much more besides in the future).

Windows Latest noticed that there’s a new plug-in for Copilot (the recently introduced add-ons that bring extra functionality to the AI assistant), which is reportedly rolling out to more people this week. It’s called the ‘Phone’ plug-in – which is succinct and very much to the point.

As you might guess, the plug-in works by leveraging the Phone Link app that connects your mobile to your Windows 11 PC and offers all sorts of nifty features therein.

So, you need to have the Phone Link app up and running before you can install the Copilot Phone plug-in. Once that's done, Windows Latest explains that the abilities you'll gain include being able to use Copilot to read and send text messages on your Android device (via the PC, of course), or look up contact information.

Right now, the plug-in doesn’t work properly, mind you, but doubtless Microsoft will be ironing out any problems. When Windows Latest tried to initiate a phone call, the plug-in didn’t facilitate this, but it did provide the correct contact info, so they could dial the number themselves.

The fact that this functionality is very basic looking right now means Google will hardly be losing any sleep – and moreover, this isn’t a direct rival for the Gemini AI app anyway, as it works to facilitate managing your Android device on your PC desktop.

Expect far greater powers to come in the future

Microsoft has previously teased the kind of powers Copilot will eventually have when it comes to hooking up your Windows 11 PC and Android phone together. For example, the AI will be able to sift through texts on your phone and extract relevant information (like the time of a dinner reservation, if you’ve made arrangements via text).

Eventually, this plug-in could be really handy, but right now, it’s still in a very early working state as noted.

While it’s for Android only for the time being, the Phone plug-in for Copilot should be coming to iOS as well, as Microsoft caters for iPhones with Phone Link (albeit in a more limited fashion). This isn’t confirmed, but we can’t imagine Microsoft will leave iPhone owners completely out in the cold when it comes to AI features such as this.

You might also like…


Logic Pro 2 is a reminder that Apple’s AI ambitions aren’t just about chatbots

While the focus of Apple’s May 7 special event was mostly hardware — four new iPads, a new Apple Pencil, and a new Magic Keyboard — there were mentions of AI with the M2 and M4 chips as well as new versions of Final Cut Pro and Logic Pro for the tablets. 

The latter is all about new AI-powered features that let you create a drum beat or a piano riff, or even add a warmer, more distorted feel to a recorded element. Even neater, Logic Pro for iPad 2 can now take a single recording and split it into individual tracks based on the instruments in a matter of seconds.

It’s a look behind the curtain at the kind of AI features Apple sees as most appealing and useful. Notably, unlike some rollouts from Google or OpenAI, it’s not a chatbot or an image generator. With Logic Pro, you're getting features that can be genuinely helpful and further expand what you can do within an app.

A trio of AI-powered additions for Logic Pro for iPad

Stem Splitter in Logic Pro for iPad 2.

Stem Splitter can separate a single track into four individual ones split up by instrument.  (Image credit: Apple)

Arguably the most helpful feature for musicians will be Stem Splitter, which aims to solve the problem of separating out elements within a given track. Say you’re working through a track or giving an impromptu performance at a cafe; you might just hit record in Voice Memos on an iPhone or using a single microphone.

The result is one track that contains all the instruments mixed together. Logic Pro 2 can now import that track, analyze it, and split it into four tracks: vocals, drums, bass, and other instruments. It won’t change the sound but essentially puts each element on a separate track, allowing you to easily modify or edit it. You can even apply plugins – something Logic is known for – on both iPad and Mac.

The iPad Pro with M4 will likely be mighty speedy when tackling this thanks to its 16-core Neural Engine, but it will work on any iPad with Apple silicon through a mixture of on-device AI and deep learning. For musicians big or small, it’s poised to be a simple, intuitive way to convert voice memos into workable and mixable tracks.

AI-powered instruments to complete a track

Bass Session Player in Logic Pro for iPad 2

A look at the Bass Session Player within Logic Pro for iPad 2. (Image credit: Apple)

Building on Stem Splitter is a big expansion with Session Players. Logic Pro has long offered Drummer – both on Mac and iPad – as a way to easily add drums to a track via a virtual player that can be customized by style and even complexity. Logic Pro for iPad 2 adds a piano and bass player to the mix, both highly adjustable session players for any given track. With piano, in particular, you can customize the left or right hand’s playing style individually, pick between four types of piano, and use a plethora of other sliding tools. It's even smart enough to recognize where on a track it is, be it a chorus or a bridge. On an iPad Pro, it took only a few seconds to come up with a decent-sized track.

If you’re only a singer or desperately need a bass line for your track, Logic Pro for iPad 2 aims to solve this with an output that plays with and complements any existing track.

Rounding out this AI expansion for Logic Pro on the iPad is a Chromaglow effect, which takes a common, expensive piece of hardware reserved for studios and places it on the iPad to add a bit more space, color, and even warmth to the track. Like other Logic plugins, you can pick between a few presets and further adjust them.

Interestingly enough, alongside these updates, Apple didn’t show off any new Apple Pencil integrations for Logic Pro for iPad 2. I’d have to imagine that we might see a customized experience with the palette tool at some point.

It’s clear that Apple’s approach to AI, like its other software, services, and hardware, is centered around crafting a meaningful experience for whoever uses it. In this case, for musicians, it’s solving pain points and opening doors for creativity further.

Stem Splitter, new session players, and Chromaglow feel right at home within Logic Pro, and I expect to see similar enhancements to other Apple apps announced at WWDC. Just imagine an easier way to edit photos or videos baked into the Photos app or a way to streamline or condense a presentation within Keynote.

Pricing and Availability

All of these features are bundled in with Logic Pro for iPad 2, which is set to launch on May 13, 2024. If you’re already subscribed at $4.99 a month or $49 a year, you’ll get the update for free, and there is no price increase if you’re new to the app. Additionally, first-time Logic Pro for iPad users can get a one-month free trial.

You might also like
