What is Google Bard? Everything you need to know about the ChatGPT rival

Earlier this year, Google finally joined the AI race and launched a ChatGPT rival called Bard – an “experimental conversational AI service”. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web at no extra cost, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like the model behind ChatGPT, PaLM 2 is a 'large language model' – a type of machine learning model that's been trained on a vast dataset and is capable of understanding human language as it's written.

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen

(Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it's “excited” about, but also risks that explain why the search giant has been so cautious about releasing Bard into the wild. The meteoric rise of ChatGPT has, though, seemingly forced its hand and expedited the public launch of Bard.

So what exactly will Google's Bard do for you and how will it compare with ChatGPT, which Microsoft appears to be building into its own search engine, Bing? Here's everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot that's built on deep learning algorithms called 'large language models', in this case one called LaMDA. 

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen

(Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multi-modal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to help it answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard will be able to give lengthy answers to a more general query like “is the piano or guitar easier to learn?”.

The Google Bard chatbot answering a question on a computer screen

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to. (Image credit: Google)

We initially found Bard to fall short in terms of features and performance compared to its competitors. But since its public deployment earlier this year, Google Bard’s toolkit has come on in leaps and bounds.

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image

(Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, allowing you to summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. This feature, a collaborative effort between Google and Adobe, will be supported by the Content Authenticity Initiative’s open-source Content Credentials technology, which will bring transparency to images that are generated through this integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally, the chatbot is based on similar technology to ChatGPT, and even more tools and features are coming that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.

A laptop screen showing the landing page for ChatGPT Plus

(Image credit: OpenAI)

Both chatbots (Google calls Bard an “experimental conversational AI service”) are also fine-tuned using human interactions to guide them towards desirable responses.

One difference between the two, though, is that the free version of ChatGPT isn't connected to the internet – unless you use a third-party plugin. That means it has very limited knowledge of facts or events after January 2022.

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn't indicated a specific date.

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Initially, Google's Bard only answered text prompts with its own written replies, similar to ChatGPT. But one of the biggest changes to Bard is its multimodal functionality, which allows the chatbot to answer user prompts and questions with both text and images.

Users can do the same in the other direction: Bard works with Google Lens, so you can upload images into Bard and have it respond in text. Multimodal functionality was hinted at for both GPT-4 and Bing Chat, and now Google Bard users can actually use it. And of course, there's also Google Bard's AI image generator, which will be powered by Adobe Firefly.

TechRadar – All the latest technology news

Read More

Google’s AI plans hit a snag as it reportedly delays next-gen ChatGPT rival

Development on Google’s Gemini AI is apparently going through a rough patch as the LLM (large language model) has reportedly been delayed to next year.

This comes from tech news site The Information, whose sources claim the project will not see a November launch as originally planned. Now it may not arrive until sometime in the first quarter of 2024, barring another delay. The report doesn’t explain exactly why the AI is being pushed back. Google CEO Sundar Pichai did lightly confirm the decision by stating the company is “focused on getting Gemini 1.0 out as soon as possible [making] sure it’s competitive [and] state of the art”. That said, The Information does suggest this situation is due to ChatGPT's strength as a rival.

Since its launch, ChatGPT has skyrocketed in popularity, effectively becoming a leading force in 2023’s generative AI wave. Besides being a content generator for the everyday user, corporations are using it for fast summarization of lengthy reports and even building new apps to handle internal processes and projections. It’s been so successful that OpenAI has had to pause sign-ups for ChatGPT Plus as servers have hit full capacity.

Plan of attack

So what is Google’s plan moving forward? According to The Information, the Gemini team wants to ensure “the primary model is as good as or better than” GPT-4, OpenAI’s latest model. That is a tall order. GPT-4 is multimodal, meaning it can accept images as well as text to launch a query and generate new content. What’s more, it boasts overall better performance than the older GPT-3.5 model, and is capable of performing more than one task at a time.

For Gemini, Google has several use cases in mind. The tech giant plans on using the AI to power new YouTube creator tools, upgrade Bard, plus improve Google Assistant. So far, it has managed to create mini versions of Gemini “to handle different tasks”, but right now, the primary focus is getting the main model up and running. 

It also plans to court advertisers with its AI, as advertising is “Google’s main moneymaker.” Company executives have reportedly talked about using Gemini to generate ad campaigns, including text and images. Videos could come later, too.

Bard upgrade

Google is far from out of the game, and while the company is putting a lot of work into Gemini, it's still building out and updating Bard.

First, if you’re stuck on your math homework, Bard will now provide step-by-step instructions on how to solve the problem, similar to Google Search. All you have to do is ask the AI or upload a picture of the question. Additionally, the platform can create charts for you by using the data you enter into the text prompts. Or you can ask it to make a smiley face like we did.

Google Bard's new chart plot feature

(Image credit: Future)

If you want to know more about this technology, we recommend learning about the five ways that ChatGPT is better than Google Bard (and three ways it isn't).


Samsung’s Apple Vision Pro rival tipped to land alongside the Galaxy Z Flip 6

The Apple Vision Pro has become a massive talking point in the tech world, and it promises to become one of the best virtual reality headsets when it's released next year. Now, Samsung wants to get in on the action with a headset of its own, and it could be revealed alongside the Galaxy Z Flip 6 in 2024.

We already know that Samsung is working with Google and Qualcomm to launch an extended reality (XR) headset at some point in the future (extended reality is a catch-all term that covers VR, AR, and MR or mixed reality). While Samsung hasn’t given any indication of a launch timeframe, Korean outlet JoongAng (translated version) claims it will launch by the end of 2024.

Specifically, it says the headset, supposedly codenamed ‘Infinite,’ will be produced by December of next year, and we’ll get our first peek at it during one of Samsung’s Unpacked events. Samsung usually hosts two of these shows every year, but JoongAng’s source says the headset will be revealed at the event held during “the second half of next year,” which is when the Galaxy Z Flip 6 is widely tipped to make an appearance.

The headset might have launched sooner, JoongAng says, but for delays caused by “product completeness” issues. Now, though, it looks like Samsung is closing in on a firm release date.

Seriously limited production

A VR headset clad in black plastic with a simple strap and six visible cameras on its faces

(Image credit: Vrtuoluo / Samsung)

Numerous reports have suggested that Apple has seriously cut back production of its Vision Pro, from around one million units to just 400,000 headsets a year. Yet even that dwarfs the number of XR headsets Samsung is set to produce.

According to JoongAng, Samsung will initially limit production of the device to just 30,000 units. This is due to the company wanting to gauge the response to its device, and assess how the industry looks after launch. In other words, Samsung wants to play it extremely safe without having to dedicate itself to a niche device in a fluctuating market.

Part of the reason for Samsung’s uncertainty might be the price. JoongAng’s report didn’t quote an expected launch price, but stated that Samsung aims to engage in a “fierce battle for leadership” in the XR space. If that’s the case, it might be planning a high-end device with a costly price tag to match – and it may want to see how the industry develops before committing too heavily to its headset.

Either way, it looks as though the XR headset battle might be about to heat up, with both Samsung and Apple working on challengers to incumbents like the Meta Quest Pro. Whether it will be enough for these devices to break through into the mainstream, though, is anyone’s guess.


Elon Musk says xAI is launching its first model and it could be a ChatGPT rival

Elon Musk’s artificial intelligence startup company, xAI, will debut its first long-awaited AI model on Saturday, November 4.

The billionaire made the announcement on X (the platform formerly known as Twitter) stating the tech will be released to a “select group” of people. He even boasts that “in some important respects, it is the best that currently exists.”

It’s been a while since we’ve last heard anything from xAI. The startup hit the scene back in July, revealing it’s run by a team of former engineers from Microsoft, Google, and even OpenAI. Shortly after the debut on July 14, Musk held a 90-minute-long Twitter Spaces chat where he talked about his vision for the company. During the chat, Musk stated his startup will seek to create “a good AGI with the overarching purpose of just trying to understand the universe”. He wants it to run contrary to what he believes is problematic tech from the likes of Microsoft and Google. 

Yet another chatbot

AGI stands for artificial general intelligence, and it’s the concept of an AI having “intelligence” comparable to or beyond that of a normal human being. The problem is that it's more of an idea of what AI could be rather than a literal piece of technology. Even Wired, in its coverage of AGI, states there’s “no concrete definition of the term”.

So does this mean xAI will reveal some kind of super-smart model that will help humanity as well as hold conversations like a sci-fi movie? No, but that could be the lofty end goal for Elon Musk and his team. We believe all we’ll see on November 4 is a simple chatbot like ChatGPT. Let’s call it “ChatX”, since the billionaire has an obsession with the letter “X”.

Does “ChatX” even stand a chance against the likes of Google Bard or ChatGPT? The latter has been around for almost a year now and has seen multiple updates, becoming more refined each time. If xAI has solved the hallucination problem, that would be great to see. Unfortunately, it's possible ChatX could just be another vehicle for Musk to spread his ideas and beliefs.

Analysis: A personal truth spinner

Musk has talked about wanting to have an alternative to ChatGPT that focuses on providing the “truth”, whatever that means. Musk has been a vocal critic of how fast companies have been developing their own generative AI models with seemingly reckless abandon. He even called for a six-month pause on AI training in March. Obviously, that didn’t happen, and the technology has advanced by leaps and bounds since then.

It's worth mentioning that Twitter, under Musk's management, has been known to comply with censorship requests by governments from around the world, so Musk's definition of truth seems dubious at best. Either way, we’ll know soon enough what the team's intentions are. Just don’t get your hopes up.

While we have you, be sure to check out TechRadar's list of the best AI writers for 2023.


Apple is secretly spending big on its ChatGPT rival to reinvent Siri and AppleCare

Apple is apparently going hard on developing AI, according to a new report that says it’s investing millions of dollars every day in multiple AI projects to rival the likes of ChatGPT.

According to those in the know (via The Verge, citing a paywalled report at The Information), Apple has teams working on conversational AI (read: chatbots), image-generating AIs, and 'multimodal AI', which would be a hybrid of the others – able to create video, image and text responses to queries.

These AI models would have a variety of uses, including supporting AppleCare users as well as boosting Siri’s capabilities.

Currently, the most sophisticated large language model (LLM) Apple has produced is known as Ajax GPT. It’s reportedly been trained on over 200 billion parameters, and is claimed to be more powerful than OpenAI’s GPT-3.5; this was what ChatGPT used when it first became available to the general public in 2022, though OpenAI has since updated its service to GPT-4.

As with all rumors, we should take these reports with a pinch of salt. For now, Apple is remaining tight-lipped about its AI plans, and much like we saw with its Vision Pro VR headset plans, it won’t reveal anything official until it’s ready – if it even has anything to reveal.

The idea of Apple developing its own alternative to ChatGPT isn’t exactly far-fetched though – everyone and their dog in the tech space is working on AI at the moment, with Google, Microsoft, X (formerly Twitter), and Meta just a few of those with public AI aspirations.

Close-up of the Siri interface

Siri can reportedly expect a few upgrades, but when? (Image credit: Shutterstock / Tada Images)

Don't expect to see Apple AI soon

We should bear in mind that polish is everything for Apple; it doesn't release new products until it feels it's got everything right, and chatbots are notoriously the antithesis of this philosophy. So much so that AI developers have a term – 'to hallucinate' – to describe when AI chatbots are incorrect, incoherent, or make information up, because they do it embarrassingly frequently. Even ChatGPT and the best ChatGPT alternatives are prone to hallucinating multiple times in a session, even when you aren’t purposefully trying to befuddle them.

We wouldn’t be too surprised if some Apple bots started to trickle out soon, though – even as early as next month. Something like its AppleCare AI assistant would presumably have a fairly simple task of matching up user complaints with a set of common troubleshooting solutions, patching you through to a human or sending you to a real-world Apple Store if it gets stumped. But something like its Ajax GPT? We’ll be lucky to see it in 2024, at least not without training wheels.

If given as much freedom as ChatGPT, Ajax could embarrass Apple and erode its reputation for delivering finely-tuned and glitch-free products out of the box. The only way we'll see Ajax soon is if AI takes a serious leap forward in terms of reliability – which is unlikely to happen quickly – or if Apple puts a boatload of limitations on its AI to ensure that it avoids making errors or wading into controversial topics. This chatbot would likely still be fine, but depending on how restricted it is, Ajax may struggle to be taken seriously as a ChatGPT rival.

Given that Apple has an event on September 12 – the Apple September event, at which we're expecting it to reveal the iPhone 15 handsets, among other products – there’s a slim chance we could hear something about its AI soon. But we wouldn’t recommend holding your breath for anything more than a few Siri updates.

Instead, we’d recommend keeping your eye on WWDC (the company’s annual developers’ conference) over the next few years to find out what AI chatbot plans Apple has up its sleeve. Just don’t be disappointed if we’re waiting until 2025 or beyond for an official update.


Apple is reportedly working on a ChatGPT rival – but you won’t see it anytime soon

Of course, Apple is working on its own generative AI, Large Language Model (LLM) and possible ChatGPT rival called, naturally, AppleGPT. Sure, the news is based on a Bloomberg report and Apple is predictably mum on the matter but, seriously, how could the Cupertino tech giant not be working on its own AI?

According to the Bloomberg report, Apple is basing its ultra-secret project on a learning framework known as Ajax, from rival and sometimes friend Google.

The effort to build some sort of chatbot and maybe other generative AI systems has been going on since late last year but, as someone who attended Apple's WWDC 2023 can tell you, Apple made no mention of chatbots of any kind at the June developer's conference.

Privacy roadblock

Apple's hyper-focus on user privacy has, as I see it, somewhat hamstrung its efforts to bring any kind of LLM-based chatbot to consumers. ChatGPT, Google Bard, and Microsoft Bing are all cloud-connected and send queries out to distant servers for rapid interpretation and response (based on the LLM's vast knowledge of how actual humans might respond under similar circumstances).

That, of course, is not the Apple way. Its Apple Silicon A16 Bionic's Neural Network is local. It does Machine Learning on your best iPhone. Sending queries with all those possibly personal details is anathema to Apple's privacy principles.

And yet, Apple clearly cannot afford to stay away from the siren call of generative AI. It is a revolution that is consuming the tech industry and the interests of average consumers and businesses. Even with the intense scrutiny AI development is under and the lawsuits some of it is facing, no one believes AI development is suddenly going to stop or go away. 

Apple has even gone as far as, according to Bloomberg, creating its own chatbot, or AppleGPT. But that's basically a highly limited and internal test and apparently not one that's ever headed to consumer desktops.

What about Siri?

Where does Siri sit in all this? 

Bloomberg claims that the Ajax work has already been used to improve Siri. That may be so, but the only Siri improvement we're getting with iOS 17 (currently in public beta) is the ability to stop starting each voice assistant prompt with “Hey.”

I have no doubt that Apple is hard at work figuring out its place in the LLM AI sphere, but it's also clear from the report that these are early days. There is no overarching strategy, and I doubt the existential question of whether or not Siri could ever host AppleGPT (or whatever it's called) has been answered.

Ultimately, this is confirmation that Apple is just as aware of what's going on around it and with competitors as ever. It will sample and test, develop and test, scrap and develop, and then test some more. I don't expect Apple to tell us anything about this during the expected September launch of the iPhone 15. However, by the time WWDC 2024 rolls around, Apple might be ready to unveil a new platform. Maybe it'll be AppleGPT-kit, AppleLLM-Kit, or even AppleGPT. 

This assumes that Apple can solve its big privacy question. If not, AppleGPT could remain a skunkworks project indefinitely.


Meta’s ChatGPT rival could make language barriers a thing of the past

The rise of AI tools like ChatGPT and Google Bard has presented the perfect opportunity to make significant leaps in multilingual speech projects, advancing language technology and promoting worldwide linguistic diversity.

Meta has taken up the challenge, unveiling its latest AI language model – which is able to recognize and generate speech in over 4,000 spoken languages.

The Massively Multilingual Speech (MMS) project means that Meta’s new AI is no mere ChatGPT replica. The model uses unconventional data sources to overcome speech barriers and allow individuals to communicate in their native languages without going through an exhaustive translation process.

Most excitingly, Meta has made MMS open-source, inviting researchers to learn from and expand upon the foundation it provides. This move suggests the company is deeply invested in dominating the AI language translation space, while also encouraging collaboration in the field.

Bringing more languages into the conversation 

Normally, speech recognition and text-to-speech AI programs need extensive training on a large number of audio datasets, combined with meticulous transcription labels. Many endangered languages found outside industrialised nations lack huge datasets like this, which puts these languages at risk of vanishing or being excluded from translation tools.

According to Gizmochina, Meta took an interesting approach to this issue and dipped into religious texts. These texts provide diverse linguistic renditions that allow Meta to get a ‘raw’ and untapped look at lesser-known languages for text-based research.

The release of MMS as an open-source resource and research project demonstrates that Meta is devoting a lot of time and effort towards the lack of linguistic diversity in the tech field, which is frequently limited to the most widely-spoken languages.

It’s an exciting development in the AI world – and one that could bring us a lot closer to having the sort of ‘universal translators’ that currently only exist in science fiction. Imagine an earpiece that, through the power of AI, could not only translate foreign speech for you in real time but also filter out the original language so you only hear your native tongue being spoken.

As more researchers work with Meta’s MMS and more languages are included, we could see a world where assistive technology and text-to-speech allow us to speak to people regardless of their native language, sharing information much more quickly. As someone trying to teach myself a language, I’m super excited about this development: it’ll make real-life conversational practice a lot easier, and help me get to grips with the informal and colloquial words and phrases only native speakers would know.


This AI-powered Photoshop rival is the end of photography as we know it

Photoshop has been steadily adding AI-powered tools to its menus in recent years, but an incredible new demo from an independent research team shows where the best photo editors are heading next.

DragGAN may not be a fully-fledged consumer product yet, but the research paper (picked up on Twitter by AI researchers @_akhaliq and @icreatelife) shows the kinds of reality-warping photo manipulation that's going to be possible very soon. This AI-powered tech will again challenge our definition of what a photo actually is.

While we've seen similar photo editing effects before – most notably in Photoshop tools like Perspective Warp – the DragGAN demo takes the idea and user interface to a new level. As the examples below show, DragGAN lets you precisely manipulate photos to change their subject's expressions, body positions and even minor details like reflections.

The results aren't always perfect, but they are impressive – and that's because DragGAN (whose name is a combination of 'drag' and 'generative adversarial network') actually generates new pixels based on the surrounding context and where you place the 'drag' points.

Photoshop's neural filters, particularly those available in the app's beta version, have dabbled in similar effects for a while, for example giving you sliders for 'happy' and 'anger' expressions for tweaking portrait images. DxO software like PhotoLab also has U Point technology, which lets you point at the part of a photo that you'd like to make local adjustments to.

But the power of the DragGAN demo is that it combines both concepts in a pretty user-friendly way, letting you pick the part of a photo you want to change and then completely changing your subject's pose, expression and more with very realistic results. 

When a refined version of this technology ultimately lands on smartphones, imperfect photos will be a thing of the past – as will the idea of a photo being a record of a real moment captured in time.

DragGAN offers more granular controls, too. If you don't want to change the entire photo, you can apply a mask to a particular area – for example, your dog's head – and the algorithm will only affect that selected area. That level of control should also help reduce artifacts and errors.

The research team has also promised that in the near future it plans “to extend point-based editing to 3D generative models.” Until then, expect to see this kind of reality-warping photo editing improve at a rapid pace in some of the best Photoshop alternatives soon. 

Analysis: The next Photoshop-style revolution

A woman sitting on a beach in an early version of Photoshop

An early demo of the first version of Photoshop, showing the iconic ‘Jennifer in Paradise’ photo being edited. (Image credit: Adobe)

These AI-powered photo editing tricks have echoes of the first early demos of Photoshop over 35 years ago – and will likely have the same level of impact, both culturally and on the democratization of photo editing.

In 1987, Adobe Photoshop co-creator John Knoll took the photo above – one of the most significant of the last century – on a Tahiti beach and used it to demo the incredible tools that would appear in the world's most famous photo editing app.

Now we're seeing some similarly momentous demos of image-manipulating tools, from Google's Magic Eraser and Face Unblur to Photoshop's new Remove Tool, which lets you remove unwanted objects in your snaps.

But this DragGAN demo, while only at the research paper phase, does take the whole concept of 'photo retouching' up a notch. It's reforming, rather than retouching, the contents of our photos, using the original expression or pose simply as a starting point for something completely different.

Photographers may argue that this is more digital art than 'drawing with light' (the phrase that gives photography its name). But just like the original Photoshop, these AI-powered tools will change photography as we know it – whether we want to embrace them or not. 

TechRadar – All the latest technology news

Read More

Adobe rival launches major update to creative suite

Serif has announced some major upgrades to its Affinity page layout and graphic design software – just six months after unveiling the feature-filled V2 of its award-winning creative suite. 

Dubbed V2.1, the update promises “better-than-ever workflow and user experience” thanks to a sweeping combination of new features and smaller, incremental updates. 

And no-one’s left behind: the all-new feature set is available for Affinity Designer, Affinity Photo, and Affinity Publisher across the Windows, Mac, and iPad apps.

What’s new in Affinity V2.1? 

The list of updates and fixes for Affinity V2.1 is vast – we did say it was major – so it’s worth checking out the full release notes.

When it comes to headline items, Affinity Designer’s new Vector Flood Fill tool and Running Headers in Affinity Publisher both make the cut. 

The former lets users fill areas created by intersecting objects and curves with one click. The latter provides a way to add the name of your document’s topic to headers and footers. Users will also finally get support for keyboard shortcuts for changing the blend mode in whatever layers they’re working on. 

Alongside the likes of snappable Vector warp modes and an Auto-select toggle come a raft of smaller tweaks. These include balanced dash lines, an enhanced cropping tool in Affinity Photo, and Brush Panel improvements based on community feedback. 

“We pay meticulous attention to what our customers tell us their requirements are for improved professional workflow and usability,” said Serif CEO Ashley Hewson. “Sometimes a very small improvement can make a huge difference and give somebody their best experience of using Affinity. All the new features have been requested heavily by our customers, and thousands of those users have helped us put 2.1 through its paces during the beta period.”

Existing Affinity users get the latest update to the design and DTP software completely free. For everyone else, V2.1 is available for a one-off cost that’s surprisingly affordable given the power of the tools. Cheerfully, there are no subscriptions either. That alone is one of the main reasons why we named Publisher one of the best alternatives to Adobe InDesign, while Affinity Photo is a serious contender for best Photoshop alternative. With the latest advances in the creative suite rolling out now, that position is only strengthened.


Microsoft could be working on an AI-powered Windows to rival Chrome OS

Microsoft is reportedly working on a new version of its ever-successful Windows operating system – but we’re not talking about Windows 12, no sir. Instead, this is ‘CorePC’, a new project from Microsoft designed to take on Google’s ultra-efficient Chrome OS.

That's according to the good folks at our sister site Windows Central, whose sources claim the idea is to create a modular iteration of Windows, which Microsoft could then tweak and customize into different ‘editions’ that better suit specific hardware. This new version of Windows would also, hopefully, be less resource-intensive than its predecessors.

CorePC (bear in mind this is a codename, and will likely not be the name of the finished OS) is rumored to also have one more trick up its sleeve: AI. Of course it’s AI – we shouldn’t be shocked, given Microsoft’s current hyperfixation on shoving popular chatbot ChatGPT into everything from the Microsoft 365 suite to the Bing search engine. Details are thin on what exactly artificial intelligence will bring to the table here, but it’s claimed to be a focus of the CorePC project.

Opinion: This could actually be really good – if Microsoft stays the course

Though this is no more than a rumor at this stage, it makes a lot of sense. For starters, this wouldn't be the first time Microsoft had experimented with building a lightweight version of Windows. 

The Windows 10X program, for instance, was supposed to be a stripped-back version of Windows 10 that cut down on features in favor of faster operation and better system security. Unfortunately for us, it was eventually canceled in 2021 and the OS never made it to our devices. There was also Windows Lite, a 2018 effort to build a lightweight Windows, which also never really saw the ‘lite’ of day.

I genuinely hope that CorePC doesn’t meet the same fate; the idea of a low-system-requirement version of Windows is an attractive one right now, with Chrome OS slowly encroaching on the budget hardware space. Hell, half of the products on our best cheap laptops list are Chromebooks at this point, and I’m a lifelong Windows devotee – I even owned a Windows phone back in the heady days of 2015 (this one, for anyone interested).

If the CorePC project specifically has the aim of creating a modernized version of Windows that can be easily adjusted to run smoothly on any device, that would be welcome. While I don’t think it will lead to the glorious return of Windows phones (a man can dream though, right?), it’d be great to see Chromebook-esque Windows laptops and tablets.

What exactly can we expect from CorePC?

Digging into the details a bit, it seems that Microsoft already has an internal version of CorePC Windows in testing. It’s barebones, running only the Edge browser with Bing AI, the Microsoft 365 suite, and Android apps – similar to how Chrome OS got access to apps from the Google Play Store back in 2016. This version of Windows is designed for super-affordable PCs and laptops intended for educational environments.

That might not sound very exciting, but here’s the good part: this test build supposedly uses as much as 75% less storage space than Windows 11 and uses a split-partition install process that allows for faster updates, safer system resets, and better security thanks to dedicated read-only partitions the user (or any third-party apps) can’t access. It’s unclear at this point whether this new version runs on a conventional 64-bit structure or if it’s a more limited ARM-based build.

Considering that Windows 11 already uses between 20 and 30 gigabytes of storage space and Windows 12 looks to be jacking up the system requirements even further, the idea of a super-compact Windows edition is quite attractive – especially for use cases in education and enterprise spaces, where security is vital and a limited feature set won’t be a hurdle to everyday usage.

We’ve already seen Windows 11 scaled down for low-end hardware in the unofficial ‘Tiny11’ OS, so it’s not entirely surprising that Microsoft is seemingly working on an official version. Though there’s no projected release date, speculation points to 2024 so the release can coincide with the expected launch of Windows 12. In any case, I've got my fingers crossed!
