Meta is on the brink of releasing AI models that it claims will have “human-level cognition” – hinting at new models capable of more than simple conversations

We could be on the cusp of a whole new realm of AI large language models and chatbots thanks to Meta’s Llama 3 and OpenAI’s GPT-5, as both companies emphasize the hard work going into making these bots more human. 

At an event earlier this week, Meta reiterated that Llama 3 will be rolling out to the public in the coming weeks, with Meta’s president of global affairs Nick Clegg stating: “Within the next month, actually less, hopefully in a very short period, we hope to start rolling out our new suite of next-generation foundation models, Llama 3.”

Meta’s large language models are publicly available, giving developers and researchers free, open access to the tech to build their own bots or conduct research on various aspects of artificial intelligence. The models are trained on a plethora of text-based information, and Llama 3 promises far more impressive capabilities than the current model.

No official release date for Meta’s Llama 3 or OpenAI’s GPT-5 has been announced just yet, but we can safely assume both models will make an appearance in the coming weeks.

Smarten Up 

Joelle Pineau, the vice president of AI research at Meta, noted that “We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory.” OpenAI’s chief operating officer Brad Lightcap told the Financial Times in an interview that the next version of GPT would show progress in solving difficult queries with reasoning.

So, it seems the next big push with these AI bots will be introducing the human element of reasoning and, for lack of a better term, ‘thinking’. Lightcap also said, “We’re going to start to see AI that can take on more complex tasks in a more sophisticated way,” adding, “We’re just starting to scratch the surface on the ability that these models have to reason.”

As tech companies like OpenAI and Meta continue working on more sophisticated and ‘lifelike’ human interfaces, it is both exciting and somewhat unnerving to think about a chatbot that can ‘think’ with reason and memory. Tools like Midjourney and Sora have demonstrated just how good AI can be in terms of quality output, and Google Gemini and ChatGPT are great examples of how helpful text-based bots can be in everyday life.

With so many ethical and moral concerns still unaddressed in the tools available right now, I dread to think what kind of nefarious things could be done with more human AI models. Plus, you must admit, it’s all starting to feel a little bit like the start of a sci-fi horror story.


From chatterbox to archive: Google’s Gemini chatbot will hold on to your conversations for years

If you were thinking of sharing your deepest, darkest secrets with Google’s freshly rebranded family of generative AI apps, Gemini, just keep in mind that someone else might see them too. Google has made this explicitly clear in a lengthy Gemini support document that elaborates on its data collection practices for the Gemini chatbot apps across platforms like Android and iOS, as well as directly in the browser.

Google explained that it’s standard practice for human annotators to read, label, and process conversations that users have with Gemini. This data is used to improve Gemini so it performs better in future conversations. Google does clarify that conversations are “disconnected” from specific Google accounts before reviewers see them, but also that they’re stored for up to three years, along with “related data” such as the user’s device, language, and location. According to TechCrunch, Google doesn’t make it clear whether these annotators are in-house or outsourced.

If you’re feeling some discomfort about relinquishing this sort of data to be able to use Gemini, Google does give users some control over which Gemini-related data is retained. You can turn off Gemini Apps Activity in the My Activity dashboard (it’s turned on by default). Turning off this setting will stop Gemini from saving conversations in the long term, starting from the moment you disable it.

However, even if you do this, Google will still save conversations associated with your account for up to 72 hours. You can also delete individual prompts and conversations in the Gemini Apps Activity screen (although, again, it’s unclear whether this fully scrubs them from Google’s records).

A direct warning that's worth heeding

Google puts the following in bold for this reason – your conversations with Gemini are not just your own:

Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

Google’s AI policies on data collection and retention are in line with those of competitors like OpenAI. OpenAI’s policy for the standard, free tier of ChatGPT is to save all conversations for 30 days, unless a user subscribes to the enterprise-tier plan and chooses a custom data retention policy.

Google and its competitors are navigating one of the most contentious aspects of generative AI – the privacy issues raised by the user data that developing and training AI models requires. So far, it’s been something of a Wild West when it comes to the ethics, morals, and legality of AI.

That said, some governments and regulators have started to take notice – for example, the FTC in the US and the Italian Data Protection Authority. Now is as good a time as ever for tech organizations and generative AI makers to pay attention and be proactive. We know they can already do this, as their corporate-oriented, paid customer models very explicitly don’t retain data. Right now, tech companies don’t feel they need to do the same for free individual users (or at least give them the option to opt out), so until they do, they’ll probably continue to scoop up all of the conversational data they can.


WhatsApp’s new Chat Lock feature will keep your private conversations safe

WhatsApp is currently rolling out a new Chat Lock feature that will ensure your private conversations stay that way.

The Chat Lock update takes chat threads and places them in their own locked folder, which can only be accessed via your device’s own password or biometrics. Additionally, the content of those conversations will be hidden in your notifications, so nosy people can't see what you're talking about.

Meta states in the announcement post that Chat Lock is ideal for people who share an unlocked smartphone with family or, as shown in the official trailer, have their device snatched by an annoying little brother. To enable the protection, all you have to do is tap the name of the chat and select the locking option. To reveal those chats, “pull down on your inbox” and then enter your password or biometrics to unlock them. Pretty simple stuff.
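Meta hasn’t shared how Chat Lock is built under the hood, but gating content behind a device’s existing screen-lock credentials is a well-established pattern on Android. For the curious, here’s a minimal, hypothetical sketch using the public androidx.biometric library; the showLockedChats helper and its wiring are our own illustration, not WhatsApp’s actual code.

    import java.util.concurrent.Executor
    import androidx.biometric.BiometricManager.Authenticators.BIOMETRIC_STRONG
    import androidx.biometric.BiometricManager.Authenticators.DEVICE_CREDENTIAL
    import androidx.biometric.BiometricPrompt
    import androidx.core.content.ContextCompat
    import androidx.fragment.app.FragmentActivity

    // Hypothetical helper: asks for the device's own biometrics or passcode
    // before revealing a locked chat folder. Illustrative only.
    fun showLockedChats(activity: FragmentActivity, onUnlocked: () -> Unit) {
        val executor: Executor = ContextCompat.getMainExecutor(activity)

        val prompt = BiometricPrompt(activity, executor,
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationSucceeded(
                    result: BiometricPrompt.AuthenticationResult
                ) {
                    onUnlocked() // credentials accepted: reveal the locked chats
                }

                override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                    // user cancelled or too many failed attempts: keep chats hidden
                }
            })

        val promptInfo = BiometricPrompt.PromptInfo.Builder()
            .setTitle("Unlock chats")
            .setSubtitle("Confirm your fingerprint, face, or device passcode")
            // fall back to the device PIN/pattern when biometrics are unavailable
            .setAllowedAuthenticators(BIOMETRIC_STRONG or DEVICE_CREDENTIAL)
            .build()

        prompt.authenticate(promptInfo)
    }

The appeal of this approach is that the app never needs a password of its own at launch; it simply defers to whatever screen lock you already use, which may be why the custom-password option described below is coming later rather than on day one.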

There are plans to expand Chat Lock options “over the next few months”. Meta states it’ll be possible to lock your conversations on companion devices, and users will soon be able to create custom passwords for their chats that differ from the one on their smartphone.

As for the launch, the post doesn’t say whether or not this is a global rollout, nor does it mention anything about being able to use Face ID to unlock chats. We reached out to Meta for clarification, and this story will be updated if we hear back.

The new Chat Lock feature on WhatsApp (Image credit: WhatsApp)

Room for improvement

Chat Lock joins WhatsApp’s long list of security features, from Device Verification to end-to-end encryption and two-factor authentication, but that doesn’t mean things are perfect. There's always room for improvement, as every now and again something goes wrong.

In this instance, we’re specifically referring to a recently discovered bug that allows WhatsApp to continuously use a phone’s microphone even when the app is closed. This was first spotted by a Twitter engineer who posted a screenshot of the app using the mic at least nine times in the early morning of May 6. Meta is aware of the issue but claims it isn’t at fault; instead, the official WhatsApp Twitter account points the finger at Google, claiming there’s a bug in the Privacy Dashboard on Android. Regardless of whose fault it is, we recommend revoking WhatsApp’s microphone permission in your device’s settings menu to ensure complete privacy.

But if that doesn’t satisfy you, check out TechRadar’s list of the best encrypted messaging apps of 2023 for alternatives.
