June 2024 has been a big month for Pixel smartphones. Not only did Gemini Nano roll out to the Pixel 8a, but Google also released a huge security update to multiple models.
It addresses 50 vulnerabilities, ranging in severity from moderate to critical. One of the more insidious flaws is CVE-2024-32896, which Tom’s Guide states “is an elevation of privilege (EoP) vulnerability.”
An EoP refers to a bug or design flaw that a bad actor can exploit to gain unfettered access to a smartphone’s resources. It’s a level of access that not even a Pixel owner normally has. Even though it’s not as severe as the others, CVE-2024-32896 did warrant an extra warning from Google on the patch’s Pixel Update Bulletin page, stating it “may be under limited, targeted exploitation.”
In other words, it's likely bad actors are going to be targeting the flaw to infiltrate a Pixel phone, so it’s important that you install the patch.
Installing the fix
The rest of the patch addresses other important components on these devices, such as the fingerprint sensor firmware, and it even fixes a handful of issues in Qualcomm and Qualcomm closed-source components.
Google’s patch is ready to download for all supported Pixel phones, and you can find the full list of models on the tech giant’s Help website. It includes the Pixel Fold, the Pixel 7 series, and the Pixel 8 line, among others.
To download the update, go to the Settings menu on your Pixel phone. Go to Security & Privacy, then to System & Updates. Scroll down to the Security Update and hit Install. Give your device enough time to install the patch and then restart your smartphone.
The flaw exists elsewhere on Android
It’s important to mention that the EoP vulnerability also seems to exist on third-party Android hardware – but a fix for those devices won’t arrive for a while. As news site Bleeping Computer explains, Pixel phones and other Android smartphones receive security updates on different schedules, because third-party devices have their own “exclusive features and capabilities” – and Pixel updates tend to land first.
Developers of GrapheneOS, a security-focused version of Android, initially found the flaw in April. In a recent post on X (the platform formerly known as Twitter), the team said it believes non-Pixel phones probably won’t receive the patch until the launch of Android 15. If your device doesn’t get the new operating system, the EoP bug will likely remain unfixed – the GrapheneOS devs claim the June update “has not been backported.”
Reactions to Apple Intelligence, which Apple unveiled at WWDC 2024, have ranged from curious to positive to underwhelmed, but whatever your views on the technology itself, a big talking point has been Apple’s emphasis on privacy, in contrast to some companies that have been offering generative AI products for some time.
Apple is putting privacy front and center with its AI offering, and has been keen to talk about how Apple Intelligence – which will be integrated across iOS 18, iPadOS 18, and macOS Sequoia – will differ from its competitors by taking a fresh approach to handling user information.
Craig Federighi, Apple’s Senior Vice President of Software Engineering, and the main presenter of the WWDC keynote, has been sharing more details about Apple Intelligence, and the company’s privacy-first approach.
Speaking to Fast Company, Federighi explained more about Apple’s overall AI ambitions, confirming that Apple is in agreement with other big tech companies that generative AI is the next big thing – as big a thing as the internet or microprocessors were when they first came about – and that we’re at the beginning of generative AI’s evolution.
Apple's commitment to AI privacy
Federighi told Fast Company that Apple is aiming to “establish an entirely different bar” to other AI services and products when it comes to privacy. He reinforced the messaging in the WWDC keynote that the personal aspect of Apple Intelligence is foundational to it and that users’ information will be under their control. He also reiterated that Apple wouldn’t be able to access your information, even while its data centers are processing it.
The practical measures that Apple is taking to achieve this begin with its lineup of Apple M-series processors, which it claims will be able to run and process many AI tasks on-device, meaning your data won’t have to leave your system. For times when that local processing power is insufficient, the task at hand will be sent to dedicated custom-built Apple servers utilizing Private Cloud Compute (PCC), offering far more grunt for requests that need it – while being more secure than other cloud products in the same vein, Apple claims.
This will mean that your device will only send the minimum information required to process your requests, and Apple claims that its servers are designed in such a way that it’s impossible for them to store your data. This is apparently because after your request is processed and returned to your device, the information is ‘cryptographically destroyed’, and is never seen by anyone at Apple.
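Apple hasn’t published code for this, but the general idea behind ‘cryptographic destruction’ can be sketched: encrypt the request under a one-off ephemeral key, and once that key is discarded, whatever ciphertext remains is unrecoverable. The toy Python below is purely illustrative – the XOR-based cipher and all names are our own stand-ins, not Apple’s actual PCC design:

```python
import secrets
from hashlib import sha256

def keystream(key: bytes, length: int) -> bytes:
    # Expand the key into a pseudo-random keystream.
    # Toy construction for illustration only -- not real cryptography.
    out = b""
    counter = 0
    while len(out) < length:
        out += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the original.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The device mints a fresh key for this one request...
ephemeral_key = secrets.token_bytes(32)
request = b"summarise my unread messages"
ciphertext = xor_cipher(request, ephemeral_key)

# ...the server processes it, the response comes back, and the key is
# then discarded. Without the key, the stored ciphertext is just noise.
ephemeral_key = None
```

The point of the sketch: ‘destroying’ data cryptographically doesn’t require wiping every copy of the ciphertext, only the key that makes it readable.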
Apple has also published an in-depth security research blog post about PCC, which, as noted at WWDC 2024, is a system open to independent security researchers, who can access Apple Intelligence servers to verify Apple’s privacy and security claims.
Apple wants AI to feel like a natural, almost unnoticeable part of its software, and the tech giant is clearly keen to win the trust of those who use its products and to differentiate its take on AI compared with that of rivals.
More about ChatGPT and Apple Intelligence in China
Federighi also talked about Apple’s new partnership with OpenAI and the integration of ChatGPT into its operating systems. The aim is to give users access to an industry-leading model while making clear that ChatGPT isn’t what powers Apple Intelligence – that’s driven exclusively by Apple’s own large language models (LLMs). The two remain entirely distinct on Apple’s platforms, but you’ll be able to enlist ChatGPT for more complex requests.
ChatGPT is only ever invoked at the user’s request and with their permission: before any request is sent to ChatGPT, you’ll have to explicitly confirm that you want to do so. Apple teamed up with OpenAI to offer this option because, according to Federighi, GPT-4o is “currently the best LLM out there for broad world knowledge.”
Apple is also considering expanding this concept to include other LLM makers in the future so that you might be able to choose from a variety of LLMs for your more demanding requests.
Federighi also talked about Apple’s plans for Apple Intelligence in China – the company’s second-biggest market – and how it is working to comply with local regulations so it can bring its most cutting-edge capabilities to all customers. That process is underway but may take a while, as Federighi observed: “We don’t have timing to announce right now, but it’s certainly something we want to do.”
We’ll have to see how Apple Intelligence performs in practice, and if Apple’s privacy-first approach pays off. Apple has a strong track record when it comes to designing products and services that integrate so seamlessly that they become a part of our everyday lives, and it might very well be on track to continue building that reputation with Apple Intelligence.
Microsoft has already been dragged over the coals regarding its Recall functionality inbound for Windows 11 by security researchers and privacy watchdogs alike – and it’ll need a flame-retardant suit for the latest fiery outpouring against the AI-powered feature.
This comes from security expert Kevin Beaumont, as highlighted by The Verge. The site notes that Beaumont worked for Microsoft briefly a few years ago.
To recap – in case you missed it somehow – Recall is an AI feature for Copilot+ PCs, which launches later this month and acts as a photographic timeline – essentially a history of everything you’ve done on your PC, recorded via screenshots that are taken regularly in the background of Windows 11.
Beaumont has come to the conclusion that Microsoft has made a giant mistake here, at least going by the feature as currently implemented – and it’s about to ship, of course. Indeed, he asserts that Microsoft is “probably going to set fire to the entire Copilot brand due to how poorly this has been implemented and rolled out,” no less.
So, what’s the big problem? Principally, a lack of thought around security – and a major discrepancy between Microsoft’s description of how Recall is supposedly kept watertight and what Beaumont has actually found.
“Microsoft told media outlets a hacker cannot exfiltrate Copilot+ Recall activity remotely. Reality: how do you think hackers will exfiltrate this plain text database of everything the user has ever viewed on their PC? Very easily, I have it automated.” – Kevin Beaumont on X, May 30, 2024
As you can see in the above post on X (formerly Twitter), one of the security expert’s main beefs with Microsoft is that it informed media outlets that a hacker can’t possibly nab Copilot+ Recall data remotely – in other words, that an attacker would need physical, in-person access to the device. This isn’t true.
In a long blog post on the topic, Beaumont explains: “This is wrong. Data can be accessed remotely.” Note that Recall does work entirely locally, as Microsoft said – it’s just that, contrary to Microsoft’s suggestion, the data can still be tapped into remotely (provided an attacker can access the PC, of course).
As Beaumont elaborates, the other big problem here is the Recall database itself, which contains all the data from those screenshots and the history of your PC usage – as all of this is stored in plain text (in an SQLite database).
This makes it very easy to snaffle all the Recall-related info of exactly how you’ve been using your Windows 11 PC – assuming an attacker can get access to the device (either remotely, or in-person).
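To see why plain-text storage matters so much, consider this Python sketch. The table and column names below are invented for illustration – the real Recall schema isn’t reproduced here – but the point stands: anything that can open an unencrypted SQLite file can dump the lot with a single query.

```python
import sqlite3

# Hypothetical stand-in for a Recall-style activity database;
# the table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE captures (ts TEXT, app TEXT, window_text TEXT)")
conn.executemany(
    "INSERT INTO captures VALUES (?, ?, ?)",
    [
        ("2024-06-01T09:00", "Browser", "Online banking - balance visible"),
        ("2024-06-01T09:05", "Messenger", "this message will auto-delete"),
    ],
)

# No decryption step needed: one query returns everything in the clear.
history = conn.execute("SELECT app, window_text FROM captures").fetchall()
for app, text in history:
    print(app, "->", text)
```

That one `SELECT` is the whole attack, which is exactly why an unencrypted database of screenshots-turned-text is such an attractive target.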
Analysis: Recall the Recall feature, or regret it
There are lots of further concerns here, too. As Microsoft pointed out when it revealed Recall, there are no limits to what can be captured in the AI-powered history of the activity on your PC (save for some slight exceptions, like Microsoft Edge’s private browsing mode – but not Chrome Incognito, tellingly).
Sensitive financial info, for example, won’t be excluded, and Beaumont further points out that auto-deleting messages in messaging apps will be screenshotted, too, so they could be accessed via a stolen Recall database. Indeed, any message you delete from the likes of WhatsApp, Signal, or whatever could be read via a Recall compromise.
But wait a minute, you might be thinking – if your PC is remotely accessed by a hacker, aren’t you in deep trouble anyway? Well, yes, that’s true – it’s not like these Recall details can be accessed unless your PC is actively exploited (though part of Beaumont’s problem is Microsoft’s apparently errant statement that any kind of remote access to Recall data wasn’t possible at all, as mentioned above).
The real kicker here is that if someone does access your PC, Recall seemingly makes it very easy for that attacker to grab all these potentially hugely sensitive details about your usage history.
While info-stealer Trojans already exist and scrape victims’ data at scale on an ongoing basis, Recall could enable this kind of personal data hoovering to be done ridiculously quickly and easily.
This is the crux of the criticism, as Beaumont explains it: “Recall enables threat actors to automate scraping everything you’ve ever looked at within seconds. During testing this with an off the shelf infostealer, I used Microsoft Defender for Endpoint – which detected the off the shelve infostealer – but by the time the automated remediation kicked in (which took over ten minutes) my Recall data was already long gone.”
This is a major part of the reason why Beaumont calls Recall “one of the most ridiculous security failings I’ve ever seen.”
That’s if Microsoft doesn’t take action before Recall ships, mind – there’s still time, in theory anyway, although the release of Copilot+ PCs is very close now. (Recall could still be temporarily kicked into touch while it’s worked on further – perhaps.)
If Recall does ship as it’s currently implemented, Beaumont advises turning it off: “Also to be super clear you can disable this in Settings when it ships, and I highly recommend you do unless they rework the feature and experience.”
Herein lies another thorny issue: the AI-powered functionality is on by default. Recall is highlighted during the Copilot+ PC setup experience and can be switched off, but the implementation means you have to tick a box during setup to open Settings afterwards, and then turn off Recall there – otherwise it simply stays on. Some Windows 11 users will likely fall into the trap of not understanding what the tick-box option means during setup and end up with Recall enabled by default.
This is not the way a feature like this should operate – particularly given the privacy concerns highlighted here – and we’ve made our feelings on this quite clear before. Anything with wide-ranging abilities like Recall should surely be off by default – or users should be presented with a very clear choice during setup, not some kind of weird ‘tick this box, jump through this hoop later’ shenanigans.
OpenAI, the tech company behind ChatGPT, has announced that it’s formed a ‘Safety and Security Committee’ that’s intended to make the firm’s approach to AI more responsible and consistent in terms of security.
It’s no secret that OpenAI and CEO Sam Altman – who will sit on the committee – want to be the first to reach AGI (Artificial General Intelligence), broadly understood as artificial intelligence that matches human-like intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.
GPT-4o debuted to the public on May 13 as a next-level multimodal (capable of processing in multiple ‘modes’) generative AI model, able to deal with input and respond in audio, text, and images. It was met with a generally positive reception, but discussion has since grown around its actual capabilities, its implications, and the ethics of technologies like it.
Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures like OpenAI co-founder and chief scientist Ilya Sutskever, and co-lead of the AI safety ‘superalignment’ team Jan Leike. Their departures were reportedly related to concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly, and was forgoing due diligence.
This has seemingly given OpenAI a lot to reflect on and it’s formed the oversight committee in response. In the announcement post about the committee being formed, OpenAI also states that it welcomes a ‘robust debate at this important moment.’ The first job of the committee will be to “evaluate and further develop OpenAI’s processes and safeguards” over the next 90 days, and then share recommendations with the company’s board.
What happens after the 90 days?
The recommendations that are subsequently agreed upon to be adopted will be shared publicly “in a manner that is consistent with safety and security.”
The committee will be made up of Chairman Bret Taylor, Quora CEO Adam D’Angelo, and Nicole Seligman, a former Sony Entertainment executive, alongside six OpenAI employees – including, as mentioned, Sam Altman, and John Schulman, a researcher and co-founder of OpenAI. According to Bloomberg, OpenAI stated that it will also consult external experts as part of the process.
I’ll reserve my judgment for when OpenAI’s adopted recommendations are published, and I can see how they’re implemented, but intuitively, I don’t have the greatest confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as much as they are trying to win the AI race.
That’s a shame; generally speaking, those striving to be the best no matter what are often slow to consider the cost and effects of their actions, and how they might impact others in a very real way – even when large numbers of people stand to be affected.
I’ll be happy to be proven wrong and I hope I am, and in an ideal world, all tech companies, whether they’re in the AI race or not, should prioritize the ethics and safety of what they’re doing at the same level that they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I’m standing, and unless there are real consequences, I don’t see companies like OpenAI being swayed that much to change their overall ethos or behavior.
Apple has released a new version of iTunes for Windows 11 (and Windows 10), which also includes support for the newly debuted iPad Air and iPad Pro models.
iTunes has been phased out for macOS and is no longer present on Apple’s own desktop operating system. Apple still updates iTunes pretty regularly on Windows, though, and this new update follows a release that brought in security fixes back in December 2023.
This latest iTunes update also delivers a security fix, dealing with a vulnerability that could lead to the app unexpectedly shutting down, or a malicious party leveraging “arbitrary code execution” (allowing an attacker to do nasty things to your PC, basically).
Apple's transition away from iTunes to more modern apps
In general, though, it does seem like Apple is trying to move away from iTunes in favor of its more modern media apps like Apple Music, Apple TV, and iCloud. These modern media apps are also available on Windows, and are optimized to match Windows 11’s own sleek contemporary aesthetics.
iTunes is more than a media app – it’s also a device manager that many users of Apple hardware are used to, allowing iPhone and iPad users to carry out tasks like backing up data and installing software. Nowadays, however, you can do that using the newer Apple Devices app, which is also available through the Microsoft Store.
There’s one caveat to consider – Apple’s new apps might not work as intended if you also have iTunes installed, as Neowin points out, so it’s advised that you pick one to use and uninstall the other.
iTunes: a timeless hub for Apple's media
Apple’s legacy media manager is a classic and still serves a purpose: it’s a place to manage all the media you’ve purchased from Apple – music, movies, and TV shows – as well as Apple Music.
If you prefer to continue to use iTunes, of course, you’re still in luck, as you can grab this latest version from the Microsoft Store. This will work whether you’re using Windows 11 or Windows 10, but not Windows 7. You can get older versions of iTunes from Apple’s website (but of course, you shouldn’t still be using Windows 7 for obvious reasons – the lack of security updates being the primary concern).
It’s good that Apple’s still looking out for users who might want to continue to use iTunes, and it also gives Apple a way in with customers who might prefer Windows as their PC’s operating system.
Microsoft has issued a mammoth Windows 11 update that brings fixes for around 150 security flaws in the operating system, including 67 Remote Code Execution (RCE) vulnerabilities. RCEs enable malicious actors to deploy code to a target device remotely, often without the victim’s consent or knowledge – so this is a Windows 11 update you definitely want to install ASAP.
This update was rolled out on Microsoft’s Patch Tuesday – the second Tuesday of every month, when Microsoft releases its scheduled security fixes.
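‘Second Tuesday of the month’ is easy to pin down programmatically – this short Python sketch computes the Patch Tuesday date for any given month:

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the given month."""
    cal = calendar.Calendar()
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.weekday() == calendar.TUESDAY and d.month == month]
    return tuesdays[1]  # index 1 = the second Tuesday

print(patch_tuesday(2024, 4))  # the update described here: 2024-04-09
```

Handy if you want to know in advance when the next batch of fixes is due.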
Three of these were classed as ‘critical’ vulnerabilities, meaning that Microsoft saw them as posing a particularly hefty risk to users. According to Bleeping Computer, more than half of the RCE vulnerabilities were found in Microsoft SQL drivers; essential software components that facilitate communication between Microsoft apps and its servers, leading to speculation that the SQL drivers share a common flaw that is being exploited by malicious users.
The three vulnerabilities classed as ‘critical’ had to do with Windows Defender, ironically an app designed by Microsoft to protect users from online threats.
A possibly record-setting update
KrebsOnSecurity, a security news site, claims that this security update sets a record for the number of Windows flaws addressed, making it the largest update Microsoft has released since at least 2017.
Two zero-day loopholes that were cause for concern
Two zero-day vulnerabilities were also addressed in April’s Patch Tuesday update, and they have apparently been exploited in malware attacks. Zero-day vulnerabilities are flaws that bad actors find and potentially exploit before the software’s developers discover them; the ‘zero’ refers to the amount of time developers had to patch the flaw before it was exploited: none.
Microsoft hasn’t said whether the zero-day flaws were being actively exploited, but this information was shared by Sophos (a software and hardware company) and Trend Micro (a cybersecurity platform).
One of these has been labeled CVE-2024-26234 by Microsoft and is classed as a Proxy Driver Spoofing Vulnerability. The other, CVE-2024-29988, is classed as a SmartScreen Prompt Security Feature Bypass Vulnerability.
You can see the full list of vulnerabilities in a report by Bleeping Computer. Mashable points out that Windows needs such a vast number of patches because it runs on machines from many different manufacturers, and has to constantly accommodate a wide variety of hardware configurations.
Some users might find Windows 11’s need for frequent updates annoying, which could lead them to consider alternative operating systems like macOS. If you’re sticking with Windows 11, KrebsOnSecurity recommends backing up your computer’s data before installing the update. I’m glad Microsoft continues to address bugs and security risks in Windows 11, even if that means we’re nagged to update the OS more often than some of its competitors. I’d urge users to install this update, which you can do through Windows Update if your PC hasn’t already started the process.
WhatsApp is currently testing a new in-app label letting you know whether or not a chat room has end-to-end encryption (E2EE).
WABetaInfo discovered the caption in the latest Android beta. According to the publication, it’ll appear underneath the contact or group name, but only if the conversation is encrypted by the company’s Signal Protocol (not to be confused with the Signal messaging app; the two are different). The line serves as a “visual confirmation” that outside forces cannot read what participants are saying or listen in on their calls. WABetaInfo adds that the text will disappear after a few seconds, allowing the Last Seen indicator to take its place. For now, it’s unknown whether the two lines will alternate or whether Last Seen will permanently replace the E2EE label.
This may not seem like a big deal since it’s just four words with a lock icon. However, this small change is important because it indicates Meta is willing to embrace third-party interoperability.
“📝 WhatsApp beta for Android 2.24.6.11: what’s new? WhatsApp is rolling out a feature to indicate when chats are end-to-end encrypted, and it’s available to some beta testers! Some users can get this feature by installing the previous updates.” – WABetaInfo on X, March 9, 2024
Third-party compatibility
On March 6, the tech giant published a report on its Engineering at Meta blog detailing how interoperability will work in Europe. The EU passed the Digital Markets Act in 2022, which, among other things, introduced rules forcing major messaging platforms to let users communicate with third-party services.
Meta’s post gets into the weeds explaining how interoperability will work. The main takeaway is that the company wants partners to use its Signal Protocol. The standard serves as the basis for E2EE on WhatsApp and Messenger, so Meta wants everyone on the same playing field.
Other services don’t have to use Signal, though. They can use their own compatible protocols, as long as they can demonstrate they offer “the same security guarantees”.
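The “same security guarantees” requirement ultimately means both ends must run compatible cryptography. At the heart of protocols like Signal’s is a key agreement in which the two parties derive the same secret without the server ever seeing it – sketched below as a bare Diffie-Hellman exchange with toy parameters (real protocols use authenticated, ratcheting constructions over much stronger groups; nothing here reflects WhatsApp’s actual implementation):

```python
import secrets

# Toy parameters: a Mersenne prime modulus and a small generator.
# Illustrative only -- nowhere near production-grade cryptography.
P = 2**521 - 1
G = 5

a = secrets.randbelow(P - 2) + 1   # Alice's private value
b = secrets.randbelow(P - 2) + 1   # Bob's private value

A = pow(G, a, P)                   # public values -- safe to relay
B = pow(G, b, P)                   # through any intermediary server

# Each side combines its own private value with the other's public
# one and arrives at the same shared secret; the server never can.
secret_alice = pow(B, a, P)
secret_bob = pow(A, b, P)
```

If one party’s protocol can’t perform this dance the same way, there is simply no shared secret to encrypt with – which is why compatibility, not just good intentions, is the sticking point.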
The wording here is pretty cut and dried: if a service doesn’t offer the same level of protection, WhatsApp won’t communicate with it. However, the beta suggests Meta is willing to be flexible, and may not completely shut out non-Signal-compliant platforms. At the very least, the company will inform its users that certain chat rooms may not be as well protected as those with E2EE enabled.
Interested Android owners can install the update via the Google Play Beta Program, although there’s a chance you won’t receive the feature; WABetaInfo states it’s only available to a handful of testers. There’s no word yet on whether WhatsApp on iOS will see the same patch.
In a new security post (via BleepingComputer), Apple says that iOS 17.4 and iPadOS 17.4 resolve two zero-day bugs in the iOS kernel and Apple’s RTKit that might allow an attacker to bypass your device’s kernel memory protections. That could potentially give malicious actors very high-level access to your device, so it’s imperative that you patch your iPhone as soon as possible by opening the Settings app, going to General > Software Update and following the on-screen instructions.
These issues are not just hypothetical; Apple says it is “aware of a report that this issue may have been exploited” in both cases, and if a zero-day flaw has been actively exploited it means hackers have been able to take advantage of these issues without anyone knowing. With that in mind, there’s every reason to update your device now that Apple has issued a set of fixes.
Apple says the bugs affect a wide range of devices: the iPhone XS and later, iPad Pro 12.9-inch 2nd generation and later, iPad Pro 10.5-inch, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 6th generation and later, and iPad mini 5th generation and later. In other words, a lot of people are potentially impacted.
Actively exploited
Zero-day flaws like these are usually exploited in targeted attacks, often by sophisticated state-sponsored groups. Apple didn’t share any details of how or when these vulnerabilities were put to nefarious use, nor whether they were discovered by Apple’s own security teams or by external researchers.
Apple devices are known for their strong defenses, but they are increasingly falling into hackers’ crosshairs. Recent research suggests that there were 20 active zero-day flaws targeting Apple products in 2023 – double the number of the previous year. According to BleepingComputer, three zero-day attacks on Apple devices have been patched so far in 2024.
This kind of exploit demonstrates why it’s so important to keep all of your devices updated with the latest patches, especially if they include security fixes. Leaving yourself vulnerable is a dangerous gamble when there are extremely sophisticated hacking groups out there in the wild. With that in mind, make sure you download the latest iOS 17.4 update as soon as you can.