Spotify finally unveils a desktop app miniplayer to end window-juggling, but there’s a catch

Spotify Premium users, rejoice – the music platform is finally adding a ‘miniplayer’ to its desktop app to improve the user experience, a whopping three years after the web app got a similar feature.

The new miniplayer has two different designs and can be activated in the bottom right corner of the full-screen player, prompting the app to shrink into a minimized view showing media controls and song information. As I mentioned above, this isn’t the first appearance of a condensed Spotify player, but it’s the first implementation of the feature for the app's desktop version. 

The new miniplayer will be available for Windows, macOS, and ChromeOS, and resembles the miniplayers of other apps, like Apple Music. Users have been requesting this desktop feature for a long time – some so desperately that they’ve built their own, with a multitude of user-created miniplayers currently available on GitHub. 

Before this addition, a small preview box with media controls would appear when users hovered over the minimized Spotify window, so combined with the existing web app miniplayer, this development hasn’t come totally out of the blue. It does make me wonder why it took so long to bring the feature to the desktop version, considering it has existed on the web for years. 

Man at a computer jamming out to music

(Image credit: Shutterstock/Andrey_Popov)

The benefits of this feature update are pretty obvious – many people have Spotify running in the background while doing other activities on their PCs. Before this update, you’d have to minimize your current activity (or resize your open windows) and switch to the Spotify window, even for something simple like skipping to the next track or episode or adjusting the in-app volume – unless you have dedicated media controls on your keyboard, that is. This was outlined in a community post on Spotify's official website: the aim is to give users better control of the player without having to interrupt their activities. 

Once users open the Spotify miniplayer, it appears as an “always on top” floating window that stays visible in front of all other opened windows on your desktop, and operates independently of whatever you’re doing in the main Spotify window. The miniplayer will also be able to play any media you can play in the main app, including music, short videos, and podcasts.

The bad news is that the feature is currently only available to Spotify Premium users, so you'll need to shell out for a subscription if you want to use it. It’ll be interesting to see if Spotify makes it available to all users in the future.

A promo shot of Spotify's new DJ feature.

AI DJ was fun, but this is a far more practical feature. (Image credit: Spotify)

To use the feature, open your Spotify desktop app and start playing some content. Then click the miniplayer icon: the small white square inside a larger white outlined square. This should open the miniplayer; if you can't see the icon, or the miniplayer doesn't pop up when you click it, try reinstalling the Spotify app. 

By giving this feature to desktop users, Spotify is only now catching up with Apple Music, a decade on. What makes the delay a little more puzzling is that miniplayer-style features have existed in other versions of the app (such as a Google Maps integration on Android phones) for a while now, so Spotify had already worked it out to some degree at least.

Maybe Spotify thought the demand for a widget-like feature simply wasn’t there, but the sheer number of third-party apps and long-standing user requests paints a different picture. In any case, I’m glad Spotify got around to giving users exactly what they’ve been asking for, and hopefully it carries on shipping features that users explicitly say they want to see. Fun novelty features like AI DJ and Spotify Wrapped are impressive and entertaining, but at the end of the day, users appreciate products that work well. 

Via Digital Music News.

You might also like…

TechRadar – All the latest technology news

Read More

ChatGPT takes the mic as OpenAI unveils the Read Aloud feature for your listening pleasure

OpenAI looks like it’s been hard at work, continuing to improve the GPT Store and recently sharing demonstrations of one of the other highly sophisticated models in its pipeline, the video-generation tool Sora. That said, it isn’t resting on ChatGPT’s previous success either: it’s giving the impressive AI chatbot the capability to read its responses out loud. The feature is being rolled out on both the web and mobile versions of the chatbot. 

The new feature is called 'Read Aloud', as per an official X (formerly Twitter) post from the generative artificial intelligence (AI) company. It will come in useful for many users, including those with accessibility needs and people using the chatbot on the go.

Users can try it for themselves now, according to The Verge, either on the web version of ChatGPT or the mobile apps (iOS and Android), and can choose from five different voices for ChatGPT to use. The feature is available whether you use GPT-3.5, the free version available to all users, or GPT-4, the premium paid version. As for languages, Read Aloud supports 37 languages (for now), and ChatGPT can automatically detect the language a conversation is happening in. 

If you want to try it on the desktop version of ChatGPT, a speaker icon should show up below the generated text to activate the feature. In the mobile apps, tap and hold the text to open the Read Aloud player, where you can play, pause, and rewind the reading of ChatGPT’s response. Bear in mind that the feature is still being rolled out, so not every user in every region will have access just yet.

A step in the right direction for ChatGPT

This isn’t the first voice-related feature ChatGPT has received: OpenAI introduced a voice chat feature in September 2023, which allowed users to make inquiries using voice input instead of typing. Users can keep this setting on, prompting ChatGPT to always respond out loud to their inputs.

The debut of this feature comes at an interesting time, as Anthropic recently introduced similar features to its own generative AI models, including Claude. Anthropic is an OpenAI competitor that has recently seen major investment from Amazon. 

Overall, this new feature is great news in my eyes (or ears), primarily for expanding accessibility to ChatGPT, but also because I've had a Read-Aloud plugin for ChatGPT in my browser for a while now. I find it interesting to listen to and analyze ChatGPT’s responses out loud, especially as I’m researching and writing. After all, its responses are designed to be as human-like as possible, and a big part of how we process actual real-life human communication is by speaking and listening to each other. 

Giving ChatGPT a capability like this can help users think about how well it is responding, as it engages listening, another of our primary ways of receiving verbal information. Beyond the obvious accessibility benefits for blind or partially sighted users, I think this is a solid move by OpenAI in cementing ChatGPT as the go-to generative AI tool, opening up another avenue for humans to connect with it. 


Microsoft unveils Turing Bletchley v3: The AI model taking Bing to the next level

Microsoft is working hard towards proving the 'intelligence' part in artificial intelligence, and has just revealed the latest version of its Turing Bletchley series of machine intelligence models, Turing Bletchley v3.

As explained in an official blog post, Turing Bletchley v3 is a multilingual vision-language foundation model, and will be integrated into many existing Microsoft products. If the name of this model sounds scary, don’t worry – let’s break it down. 

The ‘multilingual’ part is self-explanatory – the model helps Microsoft products function better in a range of languages, currently more than 90. The ‘vision-language’ part means the model has image-processing and language capabilities simultaneously, which is why this kind of model is known as ‘multimodal’. Finally, ‘foundation model’ refers to the conceptual and technical structure of the model itself. 

The first version of this multimodal model was launched in November 2021, and in 2022, Microsoft started testing the latest version – v3. Turing Bletchley v3 is pretty impressive because making a model that can “understand” one type of input (say, text or images) is already a big undertaking. This model combines both text and image processing to, in the case of Bing, improve search results. 

Incorporating neural networks 

The Turing Bletchley v3 model makes use of neural networks, a way of programming a machine that loosely mimics the human brain. These networks allow it to make connections in the following manner, as described by Microsoft itself: 

“Given an image and a caption describing the image, some words in the caption are masked. A neural network is then trained to predict the hidden words conditioned on both the image and the text. The task can also be flipped to mask out pixels instead of words.”
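The masked-caption task Microsoft describes (hide some words, then predict them from the image plus the remaining text) can be illustrated with a short sketch of just the text-masking step. This is a minimal, illustrative example; the function name, mask rate, and mask token are my assumptions, not details of Microsoft's actual pipeline:

```python
import random

def mask_caption(caption, mask_rate=0.3, mask_token="[MASK]", seed=0):
    """Hide a fraction of the words in an image caption.

    Returns the masked caption plus a {position: word} map of the
    hidden words, i.e. the targets a vision-language model would be
    trained to predict, conditioned on both the image and the
    remaining text.
    """
    rng = random.Random(seed)
    words = caption.split()
    n_masked = max(1, int(len(words) * mask_rate))
    positions = rng.sample(range(len(words)), n_masked)
    targets = {i: words[i] for i in positions}
    masked = [mask_token if i in targets else w for i, w in enumerate(words)]
    return " ".join(masked), targets

masked, targets = mask_caption("a brown dog catching a red frisbee")
print(masked)   # two words replaced by [MASK]; which ones depends on the seed
print(targets)  # the hidden words the model must recover
```

A real pipeline would pair each masked caption with its image and train a network to fill in the targets; as Microsoft notes, the task can also be flipped to mask image pixels instead of words.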

The model is trained over and over in this way, not unlike how we learn. The model is also continuously monitored and improved by Microsoft developers. 

Where else the new model is being used

Bing Search isn’t the only product that’s been revamped with Turing Bletchley v3. It’s also being used for content moderation in Microsoft’s Xbox Live game service. The model helps the Xbox moderation team to identify inappropriate and harmful content uploaded by Xbox users to their profiles. 

Content moderation is a massive job scale-wise and often mentally exhausting, so any assistance that means moderators have to see less upsetting content is a big win in my eyes. I can see Turing Bletchley v3 being deployed for content moderation in Bing Search in a similar manner.

This sounds like a significant improvement for Bing Search, and the AI-aided heat is on between Microsoft and Google. Recently, Microsoft brought Bing AI to Google Chrome, and now it’s coming for image search. Google can only see this as direct competition. It still enjoys the greatest popularity in terms of both browser share and search volume, but nothing is set in stone. Your move, Google. 


Google unveils another step in its much-needed privacy boost

Google has announced that its Privacy Sandbox proposal is one step closer to becoming reality, as the company is preparing its next stage of trials, which will focus on ad relevance and measurement.

For those unfamiliar, the search giant first unveiled its Federated Learning of Cohorts (FLoC) plan to replace third-party browser cookies, then, following backlash, announced Google Topics as a replacement as part of its Privacy Sandbox initiative. 


As the name suggests, Google Topics splits the web into different topics and divides users into groupings depending on their interests. Meanwhile, FLEDGE is dedicated to facilitating remarketing or showing ads on websites based on a user’s previous browsing history.

Now though, Google is moving ahead with testing its Privacy Sandbox and developers will be able to begin testing the Topics, FLEDGE and Attribution Reporting APIs in Chrome Canary.

Privacy Sandbox testing

According to a new blog post, Google plans to begin testing Topics and FLEDGE with a limited number of Chrome Beta users before making API testing available in the stable version of Chrome, once things are working smoothly in Beta.

The company also plans to begin testing its updated Privacy Sandbox settings and controls that will allow users to see and manage the interests associated with them or turn off the trials altogether.

Vinay Goel, product director for Privacy Sandbox, also provided some sample images of the settings the search giant plans to test in his blog post. In the Privacy Sandbox Beta menu, users will be able to toggle the trials on or off, as well as customize their choices for Browser-based ad personalization, Ad measurement, and Spam & fraud reduction. Here they’ll be able to remove interests from Topics and edit the list of sites that Privacy Sandbox uses to infer their interests.

While Chrome users in the US will be opted in to the latest Privacy Sandbox trials, those in the EU will have to opt in by changing the position of the toggle in settings. This is due to GDPR and other data protection laws that apply to Europeans.

We’ll likely hear more from Google once its initial trials are complete and the company expands them to the stable version of Chrome.

Via TechCrunch


Office 365 unveils major email security boost

Microsoft has added a new security layer to its Office 365 email service as it looks to improve the integrity of the messages going in and out. 

The company says its new protection, SMTP MTA Strict Transport Security (MTA-STS), a feature it first announced in H2 2020, will solve problems such as expired TLS certificates, issues with third-party certificates, and unsupported secure protocols.

“We have been validating our implementation and are now pleased to announce support for MTA-STS for all outgoing messages from Exchange Online,” Microsoft said in an announcement. 

Extra protection

In practice, the new security layer means all emails that are sent through Exchange Online will only be delivered through connections that have both authentication and encryption. 

That should render downgrade and man-in-the-middle attacks impossible, or at least a lot harder to pull off.

“Downgrade attacks are possible where the STARTTLS response can be deleted, thus rendering the message in cleartext. Man-in-the-middle (MITM) attacks are also possible, whereby the message can be rerouted to an attacker's server,” the announcement added.

“MTA-STS (RFC8461) helps thwart such attacks by providing a mechanism for setting domain policies that specify whether the receiving domain supports TLS and what to do when TLS can't be negotiated, for example stop the transmission.”
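For admins wondering what adoption looks like in practice, MTA-STS (RFC 8461) consists of two published pieces: a DNS TXT record announcing that a policy exists, and a policy file served over HTTPS from a well-known URL. Here is a minimal illustrative sketch using a placeholder domain; the record values and MX host are assumptions, not Microsoft's settings:

```text
; DNS TXT record (the id value must change whenever the policy file changes)
_mta-sts.example.com.   IN   TXT   "v=STSv1; id=20240301T120000"

# Policy file served at https://mta-sts.example.com/.well-known/mta-sts.txt
version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800
```

In enforce mode, sending servers that support MTA-STS will refuse to deliver mail to the domain over an unencrypted or unauthenticated connection; the standard also defines a testing mode for trying a policy out without blocking delivery.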

Those interested in adopting MTA-STS should refer to Microsoft's documentation, where the process is explained in detail.

The company is already working on further strengthening the security of Office 365 email. DANE for SMTP (DNS-based Authentication of Named Entities), which is said to provide even better protection than MTA-STS, will be rolled out in the coming months. 

“We will deploy support for DANE for SMTP and DNSSEC in two phases. The first phase, DANE and DNSSEC for outbound email (from Exchange Online to external destinations), is slowly being deployed between now and March 2022. We expect the second phase, support for inbound email, to start by the end of 2022,” BleepingComputer cited the Exchange team.

“We've been working on support for both MTA-STS and DANE for SMTP. At the very least, we encourage customers to secure their domains with MTA-STS,” Microsoft added.

“You can use both standards on the same domain at the same time, so customers are free to use both when Exchange Online offers inbound protection using DANE for SMTP by the end of 2022. By supporting both standards, you can account for senders who may support only one method.”

Via: BleepingComputer 
