macOS Sequoia could soon spark your pawn addiction with its rebooted Chess game

Mac fans are eagerly awaiting the release of macOS Sequoia and looking forward to exploring its new features, including Apple Intelligence, iPhone Mirroring, and a whole load of other goodies – but an unsung upgrade has also been spotted: an update to Chess for macOS. 

According to a report from 9to5Mac, the stock Chess game within the developer build of macOS Sequoia has received quite a facelift, with new modern graphics, a fresh background, and more realistic textures for the pieces. 

You can also change the style of the pieces and choose between wood, marble, or metal if you fancy spicing things up further.

You may be wondering why it’s such a big deal to have updated visuals for Chess, but you might be surprised to learn that the game hasn’t been updated since 2012. So, it’s been a long wait, but at least Apple has finally turned its attention to the app, and made improvements for those of us who enjoy a cheeky chess game or two between emails on a slow workday. 

All just a waiting game

While the Chess upgrade and other nuggets have been spotted in the developer beta of macOS 15, it's worth noting that if Apple hasn’t explicitly mentioned a particular feature at WWDC 2024 or in other related announcements, there's no guarantee that it'll show up in Sequoia when it launches later this year. 

That’s not to say we don’t expect to see the nice new chessboard and pieces – we just have to bear in mind that anything spotted before the public release of macOS Sequoia has to be taken with a pinch of salt, and it’s possible that further improvements to Chess will be rolled out before then.

It’s definitely an exciting time to be a Mac fan, what with refreshed MacBook Airs still cooling on the shelves, and a whole new AI-powered macOS operating system in the pipeline – and for chess fans, this news might be the icing on the cake. 


YouTube could soon make it impossible to use ad blockers on its videos – here’s how

YouTube’s crusade against ad blockers has seen the platform try out multiple strategies, from auto-skipping entire videos to crippling third-party apps. Now it’s trying something new. 

The company is now experimenting with what could be its most insidious tactic yet – server-side ad injection. This news comes from the developer behind SponsorBlock, a popular extension that skips sponsored segments in YouTube videos, who sounded the alarm on X (the platform formerly known as Twitter).

Server-side ad injection (also called server-side ad insertion, or SSAI) is where a site stitches advertisements directly into the video stream on the server, hence the name. YouTube currently uses something closer to client-side ad insertion (CSAI), where ads are loaded separately by the player in your browser or app and placed around the video as it plays. 

Ad blockers work by blocking CSAI ads, but they’re powerless against SSAI, because under that approach advertisements are “indistinguishable from the video,” according to 9to5Google.

If YouTube decides to implement SSAI on a wide scale, it would essentially break ad blockers as they’d be unable to stop commercials. A small group of users on the YouTube subreddit have reported encountering the tech, with one of the top comments noting they’re seeing ads even though they use uBlock Origin on Firefox. Nothing they do to fix the problems seems to work. 

Possible workaround

Despite all the doom and gloom surrounding the situation, hope is not lost. The SponsorBlock developer made an FAQ addressing SSAI on GitHub, explaining this is not the end of the extension. 

They state that if YouTube implements the injection, it would have to send data to the video player telling it how long each advertisement lasts. Ad blockers could, in principle, obtain that data and use it to skip the commercial. 
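As a purely hypothetical sketch of that idea, the snippet below shows how an extension might use ad-timing metadata to seek past a server-stitched ad. The segment format and field names here are invented for illustration – neither YouTube nor the SponsorBlock developer has published what this data would actually look like.

```python
# Hypothetical illustration only: the ad-segment format below is invented,
# not YouTube's real metadata or SponsorBlock's actual implementation.

AD_SEGMENTS = [
    {"start": 0.0, "duration": 15.0},    # a pre-roll ad stitched into the stream
    {"start": 312.5, "duration": 20.0},  # a mid-roll ad
]

def next_seek_target(position: float, segments: list[dict]) -> float | None:
    """If the current playback position falls inside an ad segment, return the
    timestamp just past it; otherwise return None and let playback continue."""
    for seg in segments:
        end = seg["start"] + seg["duration"]
        if seg["start"] <= position < end:
            return end
    return None

# Example: playback is 313 seconds in, which lands inside the mid-roll ad.
print(next_seek_target(313.0, AD_SEGMENTS))  # -> 332.5
```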

But giving an ad blocker the ability to do so will be difficult, and it may be a while until these extensions can reliably counter SSAI. The developer states that “SponsorBlock will not work for people” while the experiment is underway.

New restrictions

In addition to SSAI, a group of developers has found a potential new restriction on YouTube, where the platform tells you to log into your account before you can watch content. 

The website apparently wants to make sure “you’re not a bot.” Android Authority, in its report, believes YouTube might soon “limit logged-out video access in the future.” If this is ever introduced, it would severely limit how YouTube videos are shared. 


Software developers are, however, a wily bunch. The team behind content downloader Cobalt has found a way to circumvent the restriction. But YouTube could roll out stronger limitations on content sharing and an even stronger crackdown on ad blockers.

Be sure to check out TechRadar's list of the best free YouTube download apps for 2024.


Android’s Find My Device trackers are missing one big AirTags feature, but that could soon change

Google's upgraded Find My Device network is slowly rolling out globally to help Android fans find their lost belongings. And it seems that Google is already planning to add a key feature that the network lacks compared to Apple's AirTags – support for UWB (ultra-wideband) tech.

UWB is one of the main technologies that powers Apple AirTags' Precision Finding feature (below). That feature gives you directions, down to a few feet, to where your lost keys are. But Google's Find My Device network doesn't currently support the tech – even though many of the best Android phones now support ultra-wideband. 

While that oversight means that the first wave of Find My Device trackers lacks the feature, Google appears to have plans to fill the gap. As spotted by Android Authority, some code references in the latest version of the Find My Device app suggest that Google is working on adding UWB to its new network.

That doesn't necessarily mean that Google is planning to bring the feature to Find My Device soon, but it is a promising sign. And it might not be the only new feature in the pipeline for the network – another code reference points to AR (augmented reality) features via the ARCore software development kit (SDK).

In theory, that could tie in nicely with the UWB support, with a camera UI visually showing you how to track down your lost valuables. That would be a very Google integration with echoes of Google Lens, but for now, its Find My Device network lags behind its Apple rival in one small but useful area.

A nudge in the right direction

An iPhone showing the Precision Finding feature of Apple AirTags (Image credit: Apple)

The lack of UWB support on Google's Find My Device network certainly isn't a deal-breaker for the early trackers that are available now from the likes of Chipolo and Pebblebee.

Like Apple's Find My network, Google's new network anonymously leverages millions of phones worldwide to help you locate lost items. You can attach the trackers, which come in tag and card form, to valuables and tap to 'play sounds' in the app to trigger a sound or get the tracker to emit an LED flash.

Both things help compensate for the lack of a visual Precision Finding feature like the one you get with AirTags. But those visual cues can still be very handy if you can't quite tell where the sound is coming from, and Apple's integration also gives you increasingly powerful vibrations alongside the UWB-powered directions.

Then again, UWB is only really helpful at very short range, so it becomes a benefit mainly when you're already in the same room as your lost item. So while it's certainly a nice-to-have that will hopefully come to the Find My Device network, Google's rebooted network and the new trackers that support it are still a big upgrade from what was previously available on Android.


Google’s Live Caption may soon become more emotionally expressive on Android

Google is reportedly working on adding several customization options to the Live Caption accessibility feature on mobile devices. Evidence of the update was discovered by software deep diver Assemble Debug after digging through the Android System Intelligence app. According to an image shared with Android Authority, there will be four options in total, and while details are thin, there is a little explanation to be found. 

The first allows Android phones to display “emoji icons” in a caption transcript, perhaps to better convey the emotions the voices are expressing. The other three aren't as clear: the second will “emphasize emotional intensity in [the] transcription,” the third is said to add “word duration [effects],” and the fourth covers the ability to display “emotional tags.”

Feature breakdown

As you can see, the wording is pretty vague, but there’s enough to paint a picture. It seems Live Caption will become better at replicating emotions in voices it transcribes. Say, for example, you’re watching a movie and someone is angrily screaming. Live Caption could perhaps show text in all caps to signify yelling. 

The feature could also slant words in a line to indicate when someone is being sarcastic or trying to imply something. The word duration effect could refer to the software showing drawn-out letters in a set of captions: maybe someone is singing and begins to hold a note, and the sound being held could be displayed thanks to this toggle. 

Emotional tags are admittedly harder to envision. Android Authority mentions the tags will be shown and included in a transcript. This could mean the tool will add clear indicators within transcriptions of what a subject is expressing at that moment: users might see the word “Angry” pop up whenever a person sounds angry, or “Sad” whenever someone is crying.
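None of this is confirmed, but as a rough sketch of how those four toggles might translate into on-screen text, here's a short Python example that styles a caption line from a hypothetical emotion label, intensity score, and held-note flag. The labels, thresholds, and styling rules are our own guesses, not anything extracted from Android System Intelligence.

```python
# Guesswork only: hypothetical emotion labels and styling rules, not Google's
# actual Live Caption implementation.

def style_caption(text: str, emotion: str, intensity: float, held_note: bool = False) -> str:
    """Apply speculative styling: caps for intense anger, emoji and tags for mood,
    and a stretched final word for held notes ('word duration effects')."""
    if held_note:
        words = text.split()
        words[-1] += words[-1][-1] * 3  # repeat the last letter to show a held note
        text = " ".join(words)
    if emotion == "angry" and intensity > 0.7:
        text = text.upper()  # "emphasize emotional intensity"
    emoji = {"angry": "😠", "sad": "😢", "happy": "😄"}.get(emotion, "")
    return f"{emoji} [{emotion.title()}] {text}".strip()

print(style_caption("get out of my house", "angry", 0.9))
# -> 😠 [Angry] GET OUT OF MY HOUSE
print(style_caption("and I will always love you", "sad", 0.4, held_note=True))
# -> 😢 [Sad] and I will always love youuuu
```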

Greater utility

That’s our best guess. If these rumored features do operate as described, they would give Live Caption even greater utility than it already has. The feature was introduced back in 2019 as an accessibility tool to help people enjoy content if they’re hard of hearing or can’t turn on the sound for whatever reason.

The current captions are rather plain, but with this update, emotions could be conveyed by Google’s tool for a more immersive experience.

Android Authority claims the features were found in a “variant of the Android System Intelligence app”. We believe this means they were located inside a special version of the app meant for first-party hardware like the Google Pixel, so the customization tools may be exclusive to the Pixel 8 or a future model. It’s too early to tell at the moment; hopefully, the upgraded Live Caption sees a much wider release.

Until we learn more, check out TechRadar's list of the best Android phones for 2024.


OpenAI’s ChatGPT might soon be thanking you for gold and asking if the narwhal bacons at midnight like a cringey Redditor after the two companies reach a deal

If you've ever written a post or comment on Reddit, there's a chance it will be used as material for training OpenAI's AI models, after the two companies confirmed that they've reached a deal enabling this exchange. 

Reddit will be given access to OpenAI's technology to build AI features, and for that (as well as an undisclosed monetary amount), it's giving OpenAI access to Reddit posts in real-time that can be used by tools like ChatGPT to formulate more human-like responses. 

OpenAI will be able to access real-time information from Reddit's Data API – software that enables the retrieval of, and interaction with, information from Reddit's platform – providing OpenAI with structured and unique content from Reddit. This is similar to an agreement Reddit reached with Google at the beginning of the year, which allows Google to train its own AI models on Reddit's data and is reported to be worth $60 million. 
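The commercial terms and the exact interface OpenAI will use are private, but Reddit's public JSON listings give a flavour of the kind of structured, near-real-time content involved. The sketch below pulls the newest posts from a subreddit via the public endpoint – an illustration of the data shape only, not the partnership's Data API access.

```python
# Illustration of Reddit's public JSON listing, not the commercial Data API
# covered by the OpenAI deal. Reddit asks for a descriptive User-Agent header.
import requests

resp = requests.get(
    "https://www.reddit.com/r/technology/new.json",
    params={"limit": 5},
    headers={"User-Agent": "demo-reader/0.1"},
)
resp.raise_for_status()

for child in resp.json()["data"]["children"]:
    post = child["data"]
    print(f"{post['title']} (score: {post['score']}, comments: {post['num_comments']})")
```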

According to the official Reddit blog post publicizing it, the deal will help people discover and engage with Reddit's communities thanks to the Reddit content brought to ChatGPT and other new OpenAI products. Through Reddit's APIs, OpenAI's tools will be able to understand and showcase Reddit's content better, particularly when it comes to recent topics. 


Reddit, the company, and Reddit, the community of users

Users and moderators on Reddit will apparently be offered new features thanks to applications powered by OpenAI's large language models (LLMs). OpenAI will also start advertising on Reddit as an ad partner. 

The blog post put out by Reddit also claims that the deal is in the spirit of keeping the internet open, as well as fostering learning and research to keep it that way. It also says Reddit wants to continue building up its community, recognizing its uniqueness and how Reddit serves as a place for conversation online. Reddit claims the deal was signed to improve everyone's Reddit experience using AI.

It remains to be seen whether users are convinced of these benefits, but previous changes of this type and scale haven't gone down particularly well. In June 2023, over 7,000 subreddit communities went dark to protest changes to Reddit's API pricing for developers.

It also hasn't explicitly been stated by either company that Reddit data will be used to train OpenAI's models, but I think many people assume this will be the case – or that it’s already happening. In contrast, it was disclosed that Reddit would give Google “more efficient ways to train models,” and then there's the fact that OpenAI founder Sam Altman is himself a Reddit shareholder. This doesn't confirm anything specific and, as reported by The Verge, “This partnership was led by OpenAI’s COO and approved by its independent Board of Directors.”

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event (Image credit: Jason Redmond/AFP via Getty Images)

Official statements expressing the benefits of the partnership

Speaking about the partnership and as quoted in the blog post, representatives from both companies said: 

“Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up to date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they’re looking for, and helps new audiences find community on Reddit.”

– Steve Huffman, Reddit Co-Founder and CEO

“We are thrilled to partner with Reddit to enhance ChatGPT with uniquely timely and relevant information, and to explore the possibilities to enrich the Reddit experience with AI-powered features.”

– Brad Lightcap, OpenAI COO

They're not wrong, and many people make search queries appended with the word “Reddit” as Reddit threads will often provide information directly relevant to what you're searching for. 

It's an interesting development, and OpenAI's sourcing of information – both in terms of accuracy and concerning training data – has been the main topic of discussion around the ethics of its practices for some time. I suppose at least this way, Reddit users are being made aware that their information can be used by OpenAI – even if they don’t really have a choice in the matter. 

The announcement blog post reassures users that Reddit believes that “privacy is a right,” and that it has published a Public Content Policy that gives more detail about Reddit's approach to accessing public content and user protections. We'll have to see if this will be upheld as time goes on, and what the partnership looks like in practice, but I hope both companies will take users' concerns seriously. 


OpenAI’s big launch event kicks off soon – so what can we expect to see? If this rumor is right, a powerful next-gen AI model

Rumors that OpenAI has been working on something major have been ramping up over the last few weeks, and CEO Sam Altman himself has taken to X (formerly Twitter) to confirm that it won’t be GPT-5 (the next iteration of its breakthrough series of large language models) or a search engine to rival Google. What a new report, the latest in this saga, suggests is that OpenAI might be about to debut a more advanced AI model with built-in audio and visual processing.

OpenAI is towards the front of the AI race, striving to be the first to build a software tool that communicates with us as naturally as a human – able to talk to us using sound as well as text, and capable of recognizing images and objects. 

The report detailing this purported new model comes from The Information, which spoke to two anonymous sources who have apparently been shown some of its new capabilities. They claim the incoming model has better logical reasoning than those currently available to the public, and can convert text to speech. None of this is new for OpenAI as such; what is new is all of this functionality being unified in the rumored multimodal model. 

A multimodal model is one that can understand and generate information across multiple modalities, such as text, images, audio, and video. GPT-4 is also a multimodal model that can process and produce text and images, and this new model would theoretically add audio to its list of capabilities, as well as a better understanding of images and faster processing times.
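The rumored model hasn't been shown publicly, but OpenAI's current API already illustrates what a multimodal request looks like: GPT-4 Turbo accepts text and images in the same message, with audio being the piece that would be new. Here's a minimal sketch using the official openai Python library (the image URL is a placeholder).

```python
# Minimal text-plus-image request with the current OpenAI Python SDK.
# Audio input isn't part of the public chat API at the time of writing; the
# rumored model would presumably add it. Requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what's unusual about this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```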

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum, New York, January 13, 2023 (Image credit: Shutterstock/photosince)

The bigger picture that OpenAI has in mind

The Information describes Altman’s vision for OpenAI’s products in the future as involving the development of a highly responsive AI that performs like the fictional AI in the film “Her.” Altman envisions digital AI assistants with visual and audio abilities capable of achieving things that aren’t possible yet, and with the kind of responsiveness that would enable such assistants to serve as tutors for students, for example. Or the ultimate navigational and travel assistant that can give people the most relevant and helpful information about their surroundings or current situation in an instant.

The tech could also be used to enhance existing voice assistants like Apple’s Siri, and usher in better AI-powered customer service agents capable of detecting when a person they’re talking to is being sarcastic, for example.

According to those who have experience with the new model, OpenAI will make it available to paying subscribers, although it’s not known exactly when. Apparently, OpenAI has plans to incorporate the new features into the free version of its chatbot, ChatGPT, eventually. 

OpenAI is also reportedly working on making the new model cheaper to run than its most advanced model available now, GPT-4 Turbo. The new model is said to outperform GPT-4 Turbo when it comes to answering many types of queries, but apparently it’s still prone to hallucinations, a common problem with models such as these.

The company is holding an event today at 10am PT / 1pm ET / 6pm BST (or 3am AEST on Tuesday, May 14, in Australia), where OpenAI could preview this advanced model. If this happens, it would put a lot of pressure on one of OpenAI’s biggest competitors, Google.

Google is holding its own annual developer conference, I/O 2024, on May 14, and a major announcement like this could steal a lot of thunder from whatever Google has to reveal, especially when it comes to Google’s AI endeavor, Gemini.


Android users may soon have an easier, faster way to magnify on-screen elements

As we inch closer to the launch of Android 15, more of its potential features keep getting unearthed. Industry insider Mishaal Rahman found evidence of a new camera extension called Eyes Free to help stabilize videos shot by third-party apps. 

Before that, Rahman discovered another feature within the Android 15 Beta 1.2 update relating to a fourth screen magnification shortcut referred to as the “Two-finger double-tap screen” within the menu.

What it does is perfectly summed up by its name: quickly double-tapping the screen with two fingers lets you zoom in on a specific part of the display. That’s it. This may not seem like a big deal initially, but it is. 

As Rahman explains, the current three magnification shortcuts are pretty wonky. The first method requires you to hold down an on-screen button, which is convenient but means your finger obscures the view, and it only zooms into the center. The second method has you hold down both volume buttons, which frees up the screen but takes a while to activate. 

The third method is arguably the best one—tapping the phone display three times lets you zoom into a specific area. However, doing so causes the Android device to slow down, so it's not instantaneous. Interestingly enough, the triple-tap method warns people of the performance drop. 

This warning is missing from the double-tap option, indicating the zoom is near instantaneous. Putting everything together, you can think of the double-tap as the Goldilocks option: users can control where the software zooms without experiencing any slowdown.

Improved accessibility

At least, it should be that fast and a marked improvement over the triple tap. Rahman states that during his team’s time testing the feature, they noticed a delay when zooming in. He chalks this up to the unfinished state of the update, although he admits soon after that the slowdown could simply be part of the tool and may be an unavoidable aspect of the software.

It’ll probably be a while until a more stable version of the double-tap method becomes widely available. If you recall, Rahman and his team could only view the update by manually toggling the option themselves. As far as we know, it doesn’t even work at the moment.

Double-tap seems to be one of the new accessibility features coming to Android 15. There are several in the works, such as the ability to hide “unused notification channels” to help people manage alerts and forcing dark mode on apps that normally don’t support it.

While we have you, be sure to check out TechRadar's round-up of the best Android phones for 2024.


Your iPhone may soon be able to transcribe recordings and even summarize notes

It’s no secret that Apple is working on generative AI. No one knows what it’ll all entail, but a new leak from AppleInsider offers some insight. The publication recently spoke to “people familiar with the matter,” claiming Apple is working on an “AI-powered summarization [tool] and greatly enhanced audio transcription” for multiple operating systems. 

The report states these should bring “significant improvements” to staple iOS apps like Notes and Voice Memos. The latter is slated to “be among the first to receive upgraded capabilities,” namely the aforementioned transcriptions. They’ll take up a big portion of the app’s interface, replacing the graphical representation of audio recordings. AppleInsider states it functions similarly to Live Voicemail on iPhone, with a speech bubble triggering the transcription and the text appearing right on the screen.

New summarizing tools

Alongside Voice Memos, the Notes app will apparently also get some substantial upgrades. It’ll gain the ability to record audio and provide a transcription of it, just like Voice Memos. Unique to Notes, though, is the summarization tool, which will provide “a basic text summary” of all the important points in a given note.

Safari and Messages will also receive their own summarization features, although they’ll function differently. The browser will get a tool that creates short breakdowns for web pages, while in Messages, the AI provides a recap of all your texts. It’s unknown if the Safari update will be exclusive to iPhone or if the macOS version will get the same capability, but there’s a chance it could.

Apple, at the time of writing, is reportedly testing these features for an upcoming release in iOS 18 later this year. According to the report, there are plans to update the corresponding apps with the launch of macOS 15 and iPadOS 18, both of which are expected to come out in 2024.

Extra power needed

It’s important to mention that there are conflicting reports on how these AI models will work. AppleInsider claims certain tools will run “entirely on-device” to protect user privacy. However, a Bloomberg report says some of the AI features on iOS 18 will instead be powered by a cloud server equipped with Apple's M2 Ultra chip, the same hardware found on 2023’s Mac Studio.

The reason for the cloud support is that “complicated jobs” like summarizing articles require extra computing power; iPhones, by themselves, may not be able to run everything internally.

Regardless of how the company implements its software, it could help Apple catch up to its AI rivals. Samsung’s Galaxy S24 has many of these AI features already. Plus, Microsoft’s OneNote app can summarize information thanks to Copilot. Of course, take all these details with a grain of salt. Apple could always change things at the last minute.

Be sure to check out TechRadar's list of the best iPhones for 2024 to see which ones “reign supreme”.


Google Photos will soon fix your average videos with a single ‘enhance’ tap

While there are plenty of video editing tools built into smartphones, it can take some skill to pull off an edit that's pleasing to the eye. But Google Photos looks set to change that.

By digging into an upcoming version of the Photos app, Android Authority contributor and code-diver Assemble Debug found a feature called “Enhance your videos”, and with a bit of work, got it up and running. As one would guess from the name, the feature is used to enhance videos accessed via the Photos app in a single tap.

Enhance your videos can automatically adjust brightness, contrast, color saturation and other visual parameters for a selected video in order to deliver an edited version that should look better than the original, at least in the eyes of Google.
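Google hasn't detailed how the processing works, but the basic idea of a one-tap enhancement can be approximated with a simple per-frame adjustment. The sketch below uses OpenCV to apply a fixed brightness and contrast lift to every frame of a clip – a stand-in for the concept, not Google's actual algorithm, which presumably analyzes each video to pick its own parameters.

```python
# Conceptual stand-in, not Google Photos' algorithm: apply a fixed brightness
# and contrast lift to every frame of a clip (pip install opencv-python).
import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("enhanced.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # alpha > 1 boosts contrast, beta > 0 lifts brightness
    out.write(cv2.convertScaleAbs(frame, alpha=1.15, beta=12))

cap.release()
out.release()
```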

While this feature isn’t official yet, it may be somewhat familiar to Google Photos users, as there’s already an option to enhance photos in the web and mobile versions of the service. In my experience, the enhance option works rather well, though it’s far from perfect and can overbake its enhancements.

But it makes sense for Google to extend this enhancement function to videos, especially in the TikTok era; do go and check out the TechRadar TikTok for news, views and reactions to the latest tech.

One neat thing about Enhance your videos, according to Android Authority, is that all the processing happens on-device, bypassing the need for an internet connection and cloud-based processing. Whether this will work on older phones without AI-centric chipsets remains to be seen.

Given that Assemble Debug got the Enhance your videos feature up and running, it looks like it could be nearing an official rollout. We can expect to hear more about this and other upcoming Google features, as well as Android 15, at Google I/O 2024, which is set to kick off on May 14.


These are the Meta Quest alternatives we could get soon with Horizon OS, according to Mark Zuckerberg

Meta is making its Horizon OS – the operating system its Quest headsets run on – available to third-party XR devices (XR is a catchall for virtual, augmented, and mixed reality), and it might be the biggest VR announcement anyone makes this decade.

The first batch will include headsets from Asus, Lenovo, and Xbox, and while we have an idea of what these gadgets might offer, Meta CEO Mark Zuckerberg may have just provided us with a few more details, or outlined other non-Quest hardware we might see running Horizon OS in the future.

To get you up to speed, the three devices that were teased in the Horizon OS-sharing announcement are a “performance gaming headset” from Asus, “mixed reality devices for productivity” from Lenovo, and a more Quest-like headset from Xbox.

And in Meta’s Q1 2024 earnings call, Zuckerberg expanded on the recent announcement, giving some pretty specific examples of the diverse sorts of XR hardware we might see.

One was a “work-focused headset” that’s “lighter” and “less designed for motion,” which you’d plug into a laptop to use – something that could describe Lenovo’s device, given the company’s laptop pedigree. Another Zuckerberg description was for a “gaming-focused headset” that prioritizes “peripherals and haptics,” which could be the Asus headset.

Then there was a device that would be packaged with “Xbox controllers and a Game Pass subscription out of the box” – with Zuckerberg specifically connecting it to the announced Xbox device.

More Horizon OS headsets incoming?

The Meta Quest 3 being used while someone boxes in a home gym – a fitness-focused VR headset could be coming (Image credit: Meta)

He also detailed two devices which haven’t yet been teased: “An entertainment-focused headset” designed with the “highest-resolution displays” at the expense of its other specs, and “a fitness-focused headset” that’s “lighter with sweat-wicking materials.”

It’s possible that these suggestions were simply Zuckerberg riffing on potential Horizon OS devices rather than gadgets that are currently in the works. But given how plausible they sound, we wouldn’t be surprised to hear that they’re on the way – it may just be that Meta’s partners aren’t yet ready to reveal what they’re working on.

There’s also the question of other already-announced XR devices like the Samsung XR headset and if they’ll run on Horizon OS. Given Samsung has partnered with Google on its upcoming headset – which we assume is for software support – we posit that won't be the case. But we’ll have to wait and see what’s announced when the device is revealed (hopefully later this year).

All we can say is that Meta’s approach already looks to be paving the way for more diverse hardware than we’ve seen before in VR, finally giving us some fantastic options that aren’t just the Meta Quest 3 or whatever comes next to replace it.

Speaking of which, Zuckerberg added in the investor call that he thinks that Meta’s “first-party Quest devices will continue to be the most popular headsets as we see today” and that the company plans to keep making its state-of-the-art VR tech accessible to everyone. So if you do want to wait for the next Quest – perhaps the rumored Meta Quest 3 Lite or Meta Quest Pro 2 – then know that it’s likely still on the way.
