Gemma, Google’s new open-source AI model, could make your next chatbot safer and more responsible

Google has unveiled Gemma, an open-source AI model that will allow people to create their own artificial intelligence chatbots and tools based on the same technology behind Google Gemini (the suite of AI tools formerly known as Bard and Duet AI).

Gemma is a collection of open-source models built from the same technology and research as Gemini, developed by the team at Google DeepMind. Alongside the new models, Google has also released a ‘Responsible Generative AI Toolkit’ to support developers looking to get to work and experiment with Gemma, according to an official blog post.

The open-source model comes in two variations, Gemma 2B and Gemma 7B, both pre-trained on data filtered to remove sensitive or personal information. Both versions have also been tuned with reinforcement learning from human feedback (RLHF), which should significantly reduce the chance of any chatbot built on Gemma spitting out harmful content.

A step in the right direction

While it may be tempting to think of Gemma as just another model that can spawn chatbots (and you wouldn’t be entirely wrong), the company does seem to have genuinely developed it to “[make] AI helpful for everyone”, as stated in the announcement. Google’s approach with its latest model appears to be to encourage more responsible use of artificial intelligence.

Gemma’s release comes right after OpenAI unveiled the impressive video generator Sora, and while we may have to wait and see what developers can produce using Gemma, it’s comforting to see Google approach artificial intelligence with some level of responsibility. OpenAI, by contrast, has a track record of pumping out features and products and then cleaning up the mess and implementing safeguards later (in the spirit of Mark Zuckerberg’s ‘move fast and break things’ one-liner).

One other interesting feature of Gemma is that it’s designed to be run on local hardware (a single CPU or GPU, although Google Cloud is still an option), meaning that something as simple as a laptop could be used to program the next hit AI personality. Given the increasing prevalence of neural processing units in upcoming laptops, it’ll soon be easier than ever for anyone to take a stab at building their own AI.
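
For a sense of what that local workflow might look like, here’s a minimal sketch of loading the smaller Gemma variant through the Hugging Face transformers library. The model ID, prompt, and generation settings below are illustrative assumptions rather than anything specified in Google’s announcement, and the weights are gated, so you’d need to accept Google’s license on Hugging Face first.

```python
# Minimal sketch: running Gemma 2B locally with Hugging Face transformers.
# Assumes `pip install transformers torch` and access to the gated weights
# ("google/gemma-2b" is the checkpoint name published on Hugging Face).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # the larger variant is "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU by default

prompt = "Explain what an open-source language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On a machine with a supported GPU, passing device_map="auto" to from_pretrained (with the accelerate package installed) will place the model automatically, but the plain CPU path above is enough to see Gemma respond on a laptop.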

Google wants secure open-source software to be the future

After attending the recent White House Open Source Software Security Summit, Google is now calling for a public-private partnership to not only fund but also staff essential open-source projects.

In a new blog post, Kent Walker, president of global affairs and chief legal officer at both Google and Alphabet, laid out the search giant's plans to better secure the open-source software ecosystem.

For too long, businesses and governments have taken comfort in the assumption that open source software is generally secure due to its transparent nature. While many believe that more eyes on the code help detect and resolve problems, some open source projects have few eyes on them, and others have none at all.

To its credit, Google has been working to raise awareness of the state of open source security and the company has invested millions in developing frameworks and new protective tools. However, the Log4j vulnerability and others before it have shown that more work is needed across the ecosystem to develop new models to maintain and secure open source software.

Public-private partnership 

In his blog post, Walker proposes creating a new public-private partnership to identify a list of critical open source projects, helping to prioritize and allocate resources to ensure their security.

In the long term, though, new ways of identifying open source software and components that may pose a systemic risk need to be implemented, so that the required level of security can be anticipated and the appropriate resources provided.

At the same time, security, maintenance and testing baselines need to be established across both the public and private sectors. This will help ensure that national infrastructure and other important systems can continue to rely on open source projects. These standards should also be developed through a collaborative process, according to Walker, with an “emphasis on frequent updates, continuous testing and verified integrity”. Fortunately, the software community has already started this work, with organizations like the OpenSSF working across industry to create these standards.

Now that Google has weighed in on the issue of open source security, expect other tech giants like Microsoft and Apple to propose their own ideas regarding the matter.
