These new smart glasses can teach people about the world thanks to generative AI

It was only a matter of time before someone added generative AI to an AR headset, and startup Brilliant Labs has taken the plunge with its recently revealed Frame smart glasses.

Looking like a pair of Where's Waldo glasses (or Where's Wally to our UK readers), the Frame houses a multimodal digital assistant called Noa. It consists of multiple AI models from other companies working in unison to help users learn about the world around them, just by looking at something and then issuing a command. Let's say you want to know more about the nutritional value of a raspberry. Thanks to OpenAI tech, you can command Noa to perform a "visual analysis" of the subject, and the read-out appears on the outer AR lens. Additionally, it can offer real-time language translation via Whisper AI.
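Brilliant Labs hasn't published how Noa is wired together, but the general shape of such a pipeline is familiar. Below is a minimal, hypothetical sketch in Python, assuming the OpenAI SDK: Whisper transcribes the spoken command, and a vision-capable model analyzes whatever the camera sees. The model names and the capture_frame() helper are illustrative assumptions, not Noa's actual implementation.

```python
# Hypothetical sketch of a Noa-style pipeline: send a camera frame plus a spoken
# command to an OpenAI vision model, with Whisper handling speech transcription.
# Model choices and the capture_frame() helper are assumptions for illustration only.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe_command(audio_path: str) -> str:
    """Turn the wearer's spoken command into text with Whisper."""
    with open(audio_path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text

def visual_analysis(frame_jpeg: bytes, command: str) -> str:
    """Ask a vision-capable model about whatever the camera is looking at."""
    b64 = base64.b64encode(frame_jpeg).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of vision model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": command},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example: a "visual analysis" of a raspberry the wearer is looking at.
# frame = capture_frame()                       # hypothetical camera helper
# command = transcribe_command("command.wav")   # e.g. "what's the nutritional value?"
# print(visual_analysis(frame, command))
```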

The Frame can also search the internet via its Perplexity AI model, and search results will even provide price tags for potential purchases. In a recent VentureBeat article, Brilliant Labs claims Noa can provide instantaneous price checks for clothes just by scanning the piece, or fish out listings for new houses on the market – all you have to do is look at the house in question. It can even generate images on the fly through Stable Diffusion, according to ZDNET.

Evolving assistant

Going back to VentureBeat, their report offers a deeper insight into how Noa works. 

The digital assistant is always on, constantly taking in information from its environment. And it’ll apparently “adopt a unique personality” over time. The publication explains that upon activating for the first time, Noa appears as an “egg” on the display. Owners will have to answer a series of questions, and upon finishing, the egg hatches into a character avatar whose personality reflects the user. As the Frame is used, Noa analyzes the interactions between it and the user, evolving to become better at tackling tasks.

Brilliant Labs Frame exploded view

(Image credit: Brilliant Labs)

An exploded view of the Frame can be found on Brilliant Labs' official website, providing interesting insight into how the tech works. On-screen content is projected by a micro-OLED display onto a "geometric prism" in the lens – 9To5Google points out this is reminiscent of how Google Glass worked. On the nose bridge, the Frame's camera sits on a PCBA (printed circuit board assembly).

At the end of the stems, you have the batteries inside two big hubs. Brilliant Labs states the frames can last a whole day, and to charge them, you’ll have to plug in the Mister Power dongle, inadvertently turning the glasses into a high-tech Groucho Marx impersonation.

Brilliant Labs Frame with Mister Power

(Image credit: Brilliant Labs)

Availability

Currently open for pre-order, the Frame will run you $350 a pair. It'll be available in three colors: Smokey Black, Cool Gray, and the transparent H20. You can opt for prescription lenses, though doing so will bump the price tag to $448. There's a chance Brilliant Labs won't have your exact prescription; in that case, it recommends selecting the option that most closely matches your actual prescription. Shipping is free and the first batch rolls out April 15.

It appears all of the AI features are subject to a daily usage cap, though Brilliant Labs has plans to launch a subscription service lifting the limit. We reached out to the company for clarification and asked several other questions, such as exactly how the Frame receives input. This story will be updated at a later time.

Until then, check out TechRadar's list of the best VR headsets for 2024.

Google Maps could become smarter than ever thanks to generative AI

Google Maps is getting a dose of generative AI to let users search and find places in a more conversational manner, and serve up useful and interesting suggestions. 

This smart AI tech comes in the form of an “Ask about” user interface where people can ask Google Maps questions like where to find “places with a vintage vibe” in San Francisco. That will prompt AI to analyze information, like photos, ratings and reviews, about nearby businesses and places to serve up suggestions related to the question being asked.  

From this example, Google said the AI tech served up vinyl record stores, clothing stores, and flea markets in its suggestions. These included the location along with its rating, reviews, number of times rated, and distance by car. The AI then provides review summaries that highlight why a place might be of interest. 

You can then ask follow-up questions that remember your previous query, using that for context on your next search. For example, when asked, “How about lunch?” the AI will take into account the “vintage vibe” comment from the previous prompt and use that to offer an old-school diner nearby.

Screengrabs of the new generative AI features on Google Maps showing searches and suggestions

(Image credit: Google)

You can save the suggestions or share them, helping you coordinate with friends who might all have different preferences – whether that's vegan options, a dog-friendly venue, indoor seating, and so on.

By tapping into the search giant's large language models, Google Maps can analyze detailed information using data from more than 250 million locations, plus photos, ratings and reviews from its community of over 300 million contributors, to provide "trustworthy" suggestions.

The experimental feature is launching this week but is only coming to “select Local Guides” in the US. It will use these members' insights and feedback to develop and test the feature before what’s likely to be its eventual full rollout, which Google has not provided a date for.

Does anyone want this?

Users on the Android subreddit were very critical of the feature, with some referring to AI as a buzzword that big companies are chasing for clout. User lohet stated: "Generative AI doesn't have any place in a basic database search. There's nothing to generate. It's either there or it's not."

Many said they would rather see Google improve offline Maps and its location-sharing features. User chronocapybara summarized the feelings of others in the forum by saying: "If it helps find me things I'm searching for, I'm all for it. If it offloads work to the cloud, making search slower, just to give me more promoted places that are basically ads, then no."

However, AI integration in our everyday apps is here to stay, and its inclusion in Google Maps could make it easier for users to discover brand-new places while helping smaller businesses gain attention and find an audience.

Until the features roll out, you can make the most of Google Maps with our list of 10 things you didn't know Google Maps could do.

Google's new generative AI aims to help you get those creative juices flowing

It’s a big day for Google AI as the tech giant has launched a new image-generation engine aimed at fostering people’s creativity.

The tool is called ImageFX and it runs on Imagen 2, Google's "latest text-to-image model", which the company claims can deliver its "highest-quality images yet." Like so many other generative AIs before it, it generates content by having users enter a prompt into a text box. What's unique about the engine is that it comes with "Expressive Chips" – dropdown menus over keywords that let you quickly alter content with adjacent ideas. For example, ImageFX gave us a sample prompt of a dress carved out of deadwood complete with foliage. After it made a series of pictures, the AI offered the opportunity to change certain aspects, turning a beautiful forest-inspired dress into an ugly shirt made out of plastic and flowers.
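To make the Expressive Chips idea concrete, here's a rough conceptual sketch (not Google's code): treat certain keywords in the prompt as dropdown slots, each with a handful of adjacent alternatives, and rebuild the prompt whenever a selection changes.

```python
# Conceptual illustration of "Expressive Chips": keywords in a prompt template become
# dropdown slots, and swapping a selection regenerates a new prompt variant.
from itertools import product

template = "a {garment} carved out of {material}, complete with {decoration}"

chips = {
    "garment": ["dress", "shirt", "cloak"],
    "material": ["deadwood", "plastic", "woven glass"],
    "decoration": ["foliage", "flowers", "moss"],
}

def build_prompt(selection: dict) -> str:
    """Fill the template with the currently selected chip values."""
    return template.format(**selection)

# The initial prompt from the example in the article.
print(build_prompt({"garment": "dress", "material": "deadwood", "decoration": "foliage"}))

# Swapping two chips turns the forest-inspired dress into the plastic-and-flowers shirt.
print(build_prompt({"garment": "shirt", "material": "plastic", "decoration": "flowers"}))

# Or enumerate every variant the chips allow.
for combo in product(*chips.values()):
    print(build_prompt(dict(zip(chips.keys(), combo))))
```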

ImageFX - generated dress

(Image credit: Future)

ImageFX - generated shirt

(Image credit: Future)

Options in the Expressive Chips don't change – they remain fixed to the initial prompt, although you can add more to the list by selecting the tags down at the bottom. There doesn't appear to be a way to remove tags; users will have to click the Start Over button to begin anew. If the AI manages to create something you enjoy, it can be downloaded or shared on social media.

Be creative

This obviously isn't the first time Google has released a text-to-image generative AI. In fact, Bard just received the same ability. The main difference with ImageFX is, again, its encouragement of creativity. The chips can help spark inspiration by giving you ideas of how to direct the engine – ideas that you may never have thought of. Bard's feature, on the other hand, offers little to no guidance. Because it's less user-friendly, directing Bard's image generation will be trickier.

ImageFX is free to use on Google’s AI Test Kitchen. Do keep in mind it’s still a work in progress. Upon visiting the page for the first time, you’ll be met with a warning message telling you the AI “may display inaccurate info”, and in some cases, offensive content. If this happens to you, the company asks that you report it to them by clicking the flag icon. 

Also, Google wants people to keep things clean. It links to its Generative AI Prohibited Use Policy in the warning, listing what you can't do with ImageFX.

AI updates

In addition to ImageFX, Google made several updates to past experimental AIs. 

MusicFX, the brand's text-to-music engine, now allows users to generate songs up to 70 seconds in length as well as alter their speed. The tool even received Expressive Chips, helping people get those creative juices flowing. MusicFX also got a performance boost, enabling it to pump out content faster than before. TextFX, on the other hand, didn't see a major upgrade or new features; Google mainly updated the website so it's more navigable.

MusicFX's new layout

(Image credit: Future)

Everything you see here is available to users in the US, New Zealand, Kenya, and Australia. No word on whether these tools will roll out elsewhere, although we did ask. This story will be updated at a later time.

Until then, check out TechRadar's roundup of the best AI art generators for 2024 where we compare them to each other. There's no clear winner, but they do have their specialties. 

Meta opens the gates to its generative AI tech with launch of new Imagine platform

Amongst all the hullabaloo of Google's Gemini launch, Meta opened the gates to its standalone image generator website, Imagine with Meta AI.

The company has been tinkering with this technology for some time now. WhatsApp, for instance, has had a beta in-app image generator since August of this year, though accessing the feature required people to have Meta's app installed on their smartphones. Now, with Imagine, all you need is an email address to create an account on the platform. Once in, you're free to create whatever you want by entering a simple text prompt. It functions similarly to DALL-E.

We tried out the website ourselves and discovered the AI will create four 1,280 x 1,280 pixel JPEG images that you can download by clicking the three dots in the upper right corner. The option will appear in the drop-down menu.

Below is a series of images we asked the engine to make. You'll notice a watermark in the bottom left corner stating that each was created by an AI.

Homer according to Meta

(Image credit: Future)

Me, according to Meta

(Image credit: Future)

Char's Zaku, according to Meta

(Image credit: Future)

We were surprised to discover that it’s able to create content featuring famous cartoon characters like Homer Simpson and even Mickey Mouse. You’d think there would be restrictions for certain copyrighted material, but apparently not. As impressive as these images may be, there are noticeable flaws. If you look at the Homer Simpson sample, you can see parts of the picture melting into each other. Plus, the character looks downright bizarre.

Limitations (and the workarounds)

A lot of care was put into the development of Imagine. You see, it's powered by Meta's proprietary Emu learning model. According to a company research paper from September, Emu was trained on "1.1 billion images". At the time, no one really knew the source of all this data. However, Nick Clegg, Meta's president of global affairs, told Reuters it used public Facebook and Instagram posts to train the model. Altogether, over a billion social media accounts were scraped.

To rein in all this data, Meta implemented some restrictions. The tech keeps things family friendly: it'll refuse prompts that are violent or sexual, and prompts can't mention a famous person.

Despite the tech giant’s best efforts, it’s not perfect by any stretch. It appears there is a way to get around said limitations with indirect wording. For example, when we asked Meta AI to create an image of former President Barack Obama, it refused. But, when we entered “a former US president” as the prompt, the AI generated a man that resembled President Obama. 

A former US president, according to Meta

(Image credit: Future)

There are plans to introduce "invisible watermarking… for increased transparency and traceability", but it's still weeks away from being released, and a lot of damage can be done in that short period. Misuse is something Meta is concerned about; however, there are still holes. We reached out asking if it aims to implement more protection. This story will be updated at a later time.

Until then, check out TechRadar's guide on the best AI art generators for the year.

Generative AI could get more active thanks to this wild Stable Diffusion update

Stability AI, the developer behind Stable Diffusion, is previewing a new generative AI that can create short-form videos from a text prompt.

Aptly called Stable Video Diffusion, it consists of two AI models (known as SVD and SVD-XT) and is capable of creating clips at a 576 x 1,024 pixel resolution. Users will be able to customize the frame rate to run between three and 30 fps. The length of the videos depends on which of the twin models is chosen: if you select SVD, the content will play for 14 frames, while SVD-XT extends that a bit to 25 frames. The length doesn't matter too much, as rendered clips will only play for about four seconds before ending, according to the official listing on Hugging Face.
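Both checkpoints are hosted on Hugging Face, so one plausible way to try them is through the diffusers library's StableVideoDiffusionPipeline. The sketch below is just that, a sketch: the parameter choices (frame count, fps, dtype) are illustrative, a CUDA GPU is assumed, and Stability AI's own preview tooling may work differently.

```python
# Sketch of running the SVD-XT checkpoint via Hugging Face's diffusers library.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",  # the 25-frame SVD-XT model
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# In this integration, SVD is image-to-video: it animates a single conditioning frame.
image = load_image("ice_dragon.png").resize((1024, 576))

frames = pipe(image, decode_chunk_size=8, num_frames=25).frames[0]

# At 7 fps, 25 frames plays for roughly the four seconds the Hugging Face listing describes.
export_to_video(frames, "ice_dragon.mp4", fps=7)
```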

The company posted a video on its YouTube channel showing off what Stable Video Diffusion is capable of, and the content is surprisingly high quality. The clips are certainly not the nightmare fuel you see from other AI models like Meta's Make-A-Video. The most impressive, in our opinion, has to be the Ice Dragon demo. You can see a high amount of detail in the dragon's scales, plus the mountains in the background look like something out of a painting. Animation, as you can imagine, is rather limited, as the subject can only slowly bob its head. The same can be seen in other demos: it's either a stiff walking cycle or a slow panning shot.

In the early stages

Limitations don’t stop there. Stable Video Diffusion reportedly cannot “achieve perfect photorealism”, it can’t generate “legible text”, plus it has a tough time with faces. Another demonstration on Stability AI’s website does show its model is able to render a man’s face without any weird flaws so it could be on a case-by-case basis.

Keep in mind that this project is still in the early stages. It's obvious the model is not ready for a wide release, nor are there any plans for one. Stability AI emphasizes that Stable Video Diffusion is not meant "for real-world or commercial applications" at this time. In fact, it is currently "intended for research purposes only." We're not surprised the developer is being very cautious with its tech: there was an incident last year where Stable Diffusion's model leaked online, leading to bad actors using it to create deepfake images.

Availability

If you’re interested in trying out Stable Video Diffusion, you can enter a waitlist by filling out a form on the company website. It’s unknown when people will be allowed in, but the preview will include a Text-To-Video interface. In the meantime, you can check out the AI’s white paper and read up on all the nitty gritty behind the project. 

One thing we found interesting after digging through the document is that it mentions using "publicly accessible video datasets" as some of the training material. Again, it's not surprising to hear this considering that Getty Images sued Stability AI over data scraping allegations earlier this year. It looks like the team is striving to be more careful so it doesn't make any more enemies.

No word on when Stable Video Diffusion will launch. Luckily, there are other options. Be sure to check out TechRadar's list of the best AI video makers for 2023.

Google Search’s generative AI is now able to create images with just a text prompt

Google is taking on Microsoft at its own game as the tech giant has begun testing its own image generation tool on the AI-powered Search Generative Experience (SGE).

It functions almost exactly like Bing Chat: you enter a prompt directly into Google Search, and after a few seconds, four images pop out. What’s unique about it is you can choose one of the pictures and develop it even further by editing its description to add more detail. Google gives the example of asking SGE to generate “a photorealistic image of a capybara” cooking breakfast in the forest. The demo then shows you how to alter specific aspects like changing the food the animal is cooking, from bacon to hash browns, or swapping out the backdrop from trees to the sky. 

This feature won’t be locked to just Google Search as the company states you might “see an option to create AI-generated images directly in Google Images”. In that instance, one of the image search results will be replaced with a button offering access to the engine. The creation will slide in from the right in its own sub-window.

Image generation on Google Images

(Image credit: Google)

Limitations

There are some restrictions to this experiment. SGE includes safeguards that will block content that runs counter to the company’s policy for generative AI. This includes, but is not limited to, promoting illegal activities, creating misinformation, and generating anything sexually explicit that isn’t educational or “artistic”. Additionally, every picture that comes out will be marked with “metadata labeling” plus a watermark indicating it was made by an AI. 

Further down the line, AI content will receive its own About This Image description giving people important context about what they’re looking at. Google clearly does not want to be the source of misinformation on the internet.

Google states in the announcement this test is currently only available in English to American users who have opted into the SGE program. You also must be 18 years or older to use it. What isn’t mentioned is that not everyone will be given access. This includes us, which is why we’re unable to share our creations with you. 

If you’re interested in entering the program, we have a detailed guide giving step-by-step instructions on how to join SGE. It’s really easy to do. You just have to sign up on the Search Labs website on desktop or mobile. 

SGE drafts

Besides pictures, you can ask SGE to write up drafts for messages or emails if you’re not very good with words. Google gives the example of having the AI “write a note to a contractor asking for a quote” for renovating a part of your house. Once that’s done, you can take the draft into either Google Docs or Gmail where you can tweak it and give it your voice. The company states this particular content has the same level of protection as everything under the Google Workspace umbrella, so your data is safe.

Like the image generation, SGE drafts are rolling out to American users in English. No word on whether there are plans for an international release, although we did ask.

If you're looking for something on mobile, check out TechRadar's list of the four best AI art generator apps on iPhone.

Forget ChatGPT – NExT-GPT can read and generate audio and video prompts, taking generative AI to the next level

2023 has felt like a year dedicated to artificial intelligence and its ever-expanding capabilities, but the era of pure text output is already losing steam. The AI scene might be dominated by giants like ChatGPT and Google Bard, but a new large language model (LLM), NExT-GPT, is here to shake things up – offering the full bounty of text, image, audio, and video output. 

NExT-GPT is the brainchild of researchers from the National University of Singapore and Tsinghua University. Pitched as an 'any-to-any' system, NExT-GPT can accept inputs in different formats and deliver responses in the desired output format, whether video, audio, image, or text. This means that you can put in a text prompt and NExT-GPT can process that prompt into a video, or you can give it an image and have that converted to an audio output.
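For a sense of what 'any-to-any' means in practice, here's a purely conceptual sketch of the routing: encode the input modality into a shared representation, reason over it, then decode into whichever output modality was requested. The function names are hypothetical placeholders, not NExT-GPT's actual codebase (see its GitHub page for the real implementation).

```python
# Conceptual any-to-any routing: shared encoder -> LLM core -> modality-specific decoder.
from typing import Any

def encode(data: Any, modality: str) -> dict:
    """Map text/image/audio/video input into a shared representation (placeholder)."""
    return {"modality": modality, "embedding": f"<features from {modality}>", "raw": data}

def reason(shared: dict, instruction: str) -> dict:
    """The LLM core: plan a response in the shared space, conditioned on the instruction."""
    return {"plan": f"respond to '{instruction}' using {shared['modality']} input"}

def decode(plan: dict, target_modality: str) -> str:
    """Hand the plan to the decoder for the requested output modality (placeholder)."""
    return f"[{target_modality} generated from: {plan['plan']}]"

def any_to_any(data: Any, in_modality: str, instruction: str, out_modality: str) -> str:
    return decode(reason(encode(data, in_modality), instruction), out_modality)

# Text prompt in, video out; image in, audio out.
print(any_to_any("a cat shelving books", "text", "make a short clip", "video"))
print(any_to_any("miso.jpg", "image", "describe this aloud", "audio"))
```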

ChatGPT has only just gained the capability to 'see, hear and speak', which is similar to what NExT-GPT is offering – but ChatGPT is going for a more mobile-friendly version of this kind of feature, and is yet to introduce video capabilities.

We’ve seen a lot of ChatGPT alternatives and rivals pop up over the past year, but NExT-GPT is one of the few LLMs we’ve seen so far that can match the text-based output of ChatGPT but also provide outputs beyond what OpenAI’s popular chatbot can currently do. You can head over to the GitHub page or the demo page to try it out for yourself. 

So, what is it like?

I’ve fiddled around with NExT-GPT on the demo site and I have to say I’m impressed, but not blown away. Of course, this is not a polished product that has the advantages of public feedback, multiple updates, and so on – but it is still very good. 

I asked it to turn a photo of my cat Miso into an image of him as a librarian, and I was pretty happy with the result. It may not be at the same level of quality as established image generators like Midjourney or Stable Diffusion, but it was still an undeniably very cute picture.

Cat in a library wearing glasses

This is probably one of the least cursed images I’ve personally generated using AI. (Image credit: Future VIA NExT-GPT)

I also tested out the video and audio features, but that didn't go quite as well as the image generation. The videos that were generated were again not awful, but did have the very obvious ‘made by AI’ look that comes with a lot of generated images and videos, with everything looking a little distorted and wonky. It was uncanny. 

Overall, there’s a lot of potential for this LLM to fill the audio and video gaps within big AI names like OpenAI and Google. I do hope that as NExT-GPT gets better and better, we’ll be able to see a higher quality of outputs and make some excellent home movies out of our cats seamlessly in no time. 

Web.com and GoDaddy join IONOS and Wix in the generative AI integration movement

As AI-powered features become more popular across all kinds of industries, a string of website builder services are also jumping on the bandwagon.

Web.com and GoDaddy have become the latest website hosting providers to integrate new AI features into their website building platforms.

Along with Wix and IONOS – both of which recently announced AI text creation and ChatGPT integrations of their own – Web.com now offers an AI domain name generator and an AI writer, while GoDaddy offers three new products that use generative AI.

AI website building takeover  

Web.com says that both its AI domain name generator and AI writer were developed to remove some of the initial hurdles faced when building a site. 

The tool offers a variety of content prompts and interfaces depending on content needs, making it easy to tailor content to specific needs, for example, emojis for social posts. 

“Web.com offers more than 20 years of experience in helping businesses build and grow their online presences. AI Domain Name Generator and AI Writer are an outcome of our focus on simplifying the experience for customers and our commitment to bringing forward the best set of tools, all to reduce the complexity of succeeding online,” said Ed Jay, President of Newfold Digital, the parent company of Web.com. 

“With these AI features, entrepreneurs can choose the best domains for their business and create engaging content without being copywriting experts themselves. It’s like having a dedicated creative director or copywriter at your disposal.”   

Other customizable elements include design tones, keywords, and multilingual content generation in over 10 languages, including English, Spanish, French, and Mandarin.

GoDaddy incorporates AI

GoDaddy's three new generative AI products include online store product descriptions, customer service message summaries, and Instagram and Facebook ads.

With the online store product descriptions, a set of prompts is run through three AI models to deliver a summary that gets dropped into the item's description online.

The GoDaddy Conversations app summarizes customer service messages, and the new update to social platform ads lets small businesses create digital ads using generative AI.

“We've heard from small businesses who want to grow their business, but they also want to improve their work-life balance,” said GoDaddy U.S. Independents President Gourav Pani. 

“GoDaddy built these AI tools with entrepreneurs in mind. Reducing the effort to create content that attracts and engages their customers, for instance, frees up small business owners' time to focus on growing their business and devoting time to their families.”

And IONOS?

IONOS' new ChatGPT integration in MyWebsite Now Plus and Pro has been added to help customers create blogs, texts, and headlines in seconds.

The text generator uses the application programming interface (API) of OpenAI's GPT-3.5 Turbo. The new integration means that SMBs can take control of their online presence, saving valuable time and resources while creating compelling web content.
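For context, a text generator built on that API boils down to a single chat completion call. The sketch below, using OpenAI's Python SDK, shows the kind of request such an integration might make; the prompt and parameters are illustrative, since IONOS hasn't published the exact requests its tool sends.

```python
# Illustrative gpt-3.5-turbo request for generating short website copy.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You write concise, friendly website copy for small businesses."},
        {"role": "user", "content": "Write a headline and a two-sentence intro for a family-run bakery's homepage."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```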

“For a technology company like IONOS, artificial intelligence isn’t totally new,” said Achim Weiss, IONOS CEO.

“However, we’re proud to integrate AI technology into a product, supporting our customers in industry-specific writing and website maintenance. AI will further drive and accelerate the digitalization of small and medium enterprises.”

The feature is being integrated as a beta version in the MyWebsite Now Plus and Pro plans.
