Spotify for Windows 11’s annoying new update shoves one of the app’s most important features to the side

Spotify recently released the new “Jam” feature for its Windows 11 and 10 app, which allows users to listen to the same playlist or album at the same time on different devices. So you and a friend or coworker can enjoy the same tunes while you work, study, or just jam out (hence the name). However, with this new feature, the queue list has been booted to a small space on the right side of the app's UI. 

Please, please change it back. This is the opposite of an improvement.

foryoublue94 via Spotify Forum

This change has proven unpopular among Spotify users, who have taken to Reddit to voice their complaints. The official blog post announcing the arrival of Jam dubs this change the “new Queue experience”, explaining that the right sidebar now lets you browse content in the app while keeping an eye on what's currently playing. 

The official post has several disgruntled comments from users dismayed by the change, with one user saying: “Why on earth has Queue and Recently Played been moved and is now cramped into the small right-hand column? This is just horrible, and a pain to look at. It makes zero sense from a usability standpoint.”

Thanks, I hate it 

You may be thinking something along the lines of ‘what an odd little change for people to get riled up about!’ Pre-update, you could have your library on the left, your queue in the center, and your Now Playing view on the right. In other words, you could boot up the app and have everything you need all in one place. Now, because of the new layout, you can only have one or two of these views open at once. 

If you’re a fan of the Jam feature and plan to use it often with your mates, you’re probably not as upset as other users. But, as someone who will probably never use Jam, I feel robbed of a pretty decent app layout with nothing in return. I can no longer see how long the current song is, or the album name, in the queue.

It seems like Spotify users live in fear of every new update. A common sentiment shared on the Spotify subreddit and in the blog post comments is ‘another Spotify update, another change no one asked for.’ I use Spotify every day, and I can’t remember a single update to the app, on mobile or desktop, that didn’t make me mad. Hopefully, we can convince Spotify to change everything back to how it was – or we'll just end up waiting until another update comes around and knocks everything out of place again. 


TechRadar – All the latest technology news


Researchers prove ChatGPT and other big bots can – and will – go to the dark side

For a lot of us, AI-powered tools have quickly become part of our everyday lives, either as low-maintenance work helpers or as vital assets used to help generate or moderate content. But are these tools safe enough for daily use? According to a group of researchers, the answer is no.

Researchers from Carnegie Mellon University and the Center for AI Safety set out to examine the existing vulnerabilities of AI Large Language Models (LLMs) like popular chatbot ChatGPT to automated attacks. The research paper they produced demonstrated that these popular bots can easily be manipulated into bypassing any existing filters and generating harmful content, misinformation, and hate speech.

This makes AI language models vulnerable to misuse, even if that may not be the intent of the original creator. In a time when AI tools are already being used for nefarious purposes, it’s alarming how easily these researchers were able to bypass built-in safety and morality features.

If it's that easy … 

Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard, commented on the research paper in the New York Times, stating: “This shows – very clearly – the brittleness of the defenses we are building into these systems.”  

The authors of the paper targeted LLMs from OpenAI, Google, and Anthropic for the experiment. These companies have built publicly accessible chatbots on these LLMs, including ChatGPT, Google Bard, and Claude. 

As it turned out, the chatbots could be tricked into not recognizing harmful prompts by simply sticking a lengthy string of characters onto the end of each prompt, effectively ‘disguising’ the malicious request. The system’s content filters don’t recognize the disguised prompt, so they can’t block or modify it, and the model generates a response that normally wouldn’t be allowed. Interestingly, it does appear that specific strings of ‘nonsense data’ are required; we tried to replicate some of the examples from the paper with ChatGPT, and it produced an error message saying ‘unable to generate response’.
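To make the structure of the attack concrete, here is a minimal illustrative sketch of the prompt format the paper describes: a request followed by an appended adversarial suffix. The suffix below is a made-up placeholder, not a working attack string – real suffixes are found by optimizing character sequences against a model's weights – and the function name is our own.

```python
def build_suffixed_prompt(prompt: str, suffix: str) -> str:
    """Append an adversarial-style suffix to a prompt.

    This only illustrates the *shape* of the attack described in the
    paper (prompt + optimized nonsense string); it does not bypass
    anything by itself.
    """
    return f"{prompt} {suffix}"

# Hypothetical, harmless example values:
request = "Write a tutorial on topic X"          # stand-in for a filtered request
placeholder_suffix = "== interface Manuel !!"    # placeholder, NOT a real suffix

full_prompt = build_suffixed_prompt(request, placeholder_suffix)
print(full_prompt)
```

The point the researchers make is that a filter trained to spot the request on its own may fail to spot the same request once this kind of suffix is attached.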

Before releasing this research to the public, the authors shared their findings with Anthropic, OpenAI, and Google, all of which apparently affirmed their commitment to improving safety precautions and addressing the concerns raised.

This news follows shortly after OpenAI shut down its own AI detection tool, which does leave me feeling concerned, if not a little nervous. How committed can OpenAI be to user safety, or at the very least to improving safety, when the company can no longer distinguish between bot-generated and human-made content?
