OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and its DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. On Monday, however, OpenAI revealed that it had given “visual artists, designers, creative directors, and filmmakers” access to Sora, and shared their efforts in a “first impressions” blog post.

While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's artist in residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI Sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.

TechRadar – All the latest technology news

Google Lens just got a powerful AI upgrade – here’s how to use it

We've just seen the Samsung Galaxy S24 series unveiled with plenty of AI features packed inside, but Google isn't slowing down when it comes to upgrading its own AI tools – and Google Lens is the latest to get a new feature.

The new feature is actually an update to the existing multisearch feature in Google Lens, which lets you tweak searches you run using an image: as Google explains, those queries can now be more wide-ranging and detailed.

For example, Google Lens already lets you take a photo of a pair of red shoes, and append the word “blue” to the search so that the results turn up the same style of shoes, only in a blue color – that's the way that multisearch works right now.

The new and improved multisearch lets you add more complicated modifiers to an image search. So, in Google's own example, you might search with a photo of a board game (above), and ask “what is this game and how is it played?” at the same time. You'd get instructions for playing it from Google, rather than just matches to the image.

All in on AI

Two phones on an orange background showing Google Lens

(Image credit: Google)

As you would expect, Google says this upgrade is “AI-powered”, in the sense that image recognition technology is being applied to the photo you're using to search with. There's also some AI magic applied when it comes to parsing your text prompt and correctly summarizing information found on the web.

Google says the multisearch improvements are rolling out to all Google Lens users in the US this week: you can find it by opening up the Google app for Android or iOS, and then tapping the camera icon to the right of the main search box (above).

If you're outside the US, you can try out the upgraded functionality, but only if you're signed up for the Search Generative Experience (SGE) trial that Google is running – that's where you get AI answers to your searches rather than the familiar blue links.

Also just announced by Samsung and Google is a new Circle to Search feature, which means you can just circle (or scribble on) anything on screen to run a search for it on Google, making it even easier to look up information visually on the web.


Google Gemini is its most powerful AI brain so far – and it’ll change the way you use Google

Google has announced the new Gemini artificial intelligence (AI) model, an AI system that will power a host of the company’s products, from the Google Bard chatbot to its Pixel phones. The company calls Gemini “the most capable and general model we’ve ever built,” claiming it would make AI “more helpful for everyone.”

Gemini will come in three 'sizes': Ultra, Pro and Nano, with each one designed for different uses. All of them will be multimodal, meaning they’ll be able to handle a wide range of inputs, with Google saying that Gemini can take text, code, audio, images and video as prompts.

While Gemini Ultra is designed for extremely demanding use cases such as in data centers, Gemini Nano will fit in your smartphone, raising the prospect of the best Android smartphones gaining a significant AI advantage.

With all of this new power, Google insists that it conducted “rigorous testing” to identify and prevent harmful results arising from people’s use of Gemini. That was challenging, the company said, because the multimodal nature of Gemini means two seemingly innocuous inputs (such as text and an image) can be combined to create something offensive or dangerous.

Coming to all your services and devices

Google has been under pressure to catch up with OpenAI’s ChatGPT and its advanced AI capabilities. Just a few days ago, in fact, news was circulating that Google had delayed its Gemini announcement until next year due to its apparent poor performance in a variety of languages. 

Now, it turns out that news was either wrong or Google is pressing ahead despite Gemini’s rumored imperfections. On this point, it’s notable that Gemini will only work in English at first.

What does Gemini mean for you? Well, if you use a Pixel 8 Pro phone, Google says it can now run Gemini Nano, bringing all of its AI capabilities to your pocket. According to a Google blog post, Gemini is found in two new Pixel 8 Pro features: Smart Reply in Gboard, which suggests message replies to you, and Summarize in Recorder, which can sum up your recorded conversations and presentations.

The Google Bard chatbot has also been updated to run Gemini, which the company says is “the biggest upgrade to Bard since it launched.” As well as that, Google says that “Gemini will be available in more of our products and services like Search, Ads, Chrome and Duet AI” in the coming months.

As part of the announcement, Google revealed a slate of Gemini demonstrations. These show the AI guessing what a user was drawing, playing music to match a drawing, and more.

Gemini vs ChatGPT

Google Gemini revealed at Google I/O 2023

(Image credit: Google)

It’s no secret that OpenAI’s ChatGPT has been the most dominant AI tool for months now, and Google wants to end that with Gemini. The company has made some pretty bold claims about its abilities, too.

For instance, Google says that Gemini Ultra’s performance exceeds current state-of-the-art results in “30 of the 32 widely-used academic benchmarks” used in large language model (LLM) research and development. In other words, Google thinks it eclipses GPT-4 in nearly every way.

Compared to the GPT-4 LLM that powers ChatGPT, Gemini came out on top in seven out of eight text-based benchmarks, Google claims. As for multimodal tests, Gemini won in all 10 benchmarks, as per Google’s comparison.

Does this mean there’s a new AI champion? That remains to be seen, and we’ll have to wait for more real-world testing from independent users. Still, what is clear is that Google is taking the AI fight very seriously. The ball is very much in OpenAI’s (and Microsoft's) court now.


Microsoft’s AI tinkering continues with powerful new GPT-4 Turbo upgrade for Copilot in Windows 11

Bing AI, which Microsoft recently renamed from Bing Chat to Copilot – yes, even the web-based version is now officially called Copilot, just to confuse everyone a bit more – should get GPT-4 Turbo soon enough, but there are still issues to resolve around the implementation.

Currently, Bing AI runs GPT-4, but GPT-4 Turbo will allow for various benefits including more accurate responses to queries and other important advancements.

We found out more about how progress was coming with the move to GPT-4 Turbo thanks to an exchange on X (formerly Twitter) between a Bing AI user and Mikhail Parakhin, Microsoft’s head of Advertising and Web Services.

As MS Power User spotted, Ricardo, a denizen of X, noted that they just got access to Bing’s redesigned layout and plug-ins, and asked: “Does Bing now use GPT-4 Turbo?”

Parakhin responded to say that GPT-4 Turbo is not yet working in Copilot, as a few kinks still need to be ironed out.


Of course, as well as Copilot on the web (formerly Bing Chat), this enhancement will come to Copilot in Windows 11 too (which is essentially Bing AI, just with bells and whistles added in terms of controls for Windows and manipulating settings).


Analysis: Turbo mode

We’re taking the comment that a ‘few’ kinks are still to be resolved as a suggestion that much of the work around implementing GPT-4 Turbo has been carried out, meaning it could arrive in Copilot soon – or we can certainly keep our fingers crossed that this is the case.

Expect it to bring more accurate and relevant responses to queries, as noted, and it should be faster too (as the name suggests). As Microsoft observes, it “has the latest training data with knowledge up to April 2023” – though it’s still in preview. OpenAI only announced GPT-4 Turbo earlier this month, and said that it’s also going to be cheaper to run (for developers paying for GPT-4, that is).

In theory, it should represent a sizeable step forward for Bing AI, and that’s something to look forward to hopefully in the near future.


Google Photos update could make it a powerful new reminders app

Google Photos continues to get smarter and it could soon gain the ability to let you set reminders for certain tasks and events, all from within the photo management app. 

There’s already a myriad of smart AI-powered options within Google Photos, from extracting text from an image and translating languages to using the Google Lens feature to pick out even more information in photos and search Google for highlighted items. But this forthcoming reminder function, spotted by The SpAndroid, continues to build the Photos app into more than just a place to store, edit and peruse shots.

Much like the “Copy text”, “Search” and “Listen” ‘chips’ (aka prompts) that pop up to offer you various options, an incoming Google Photos update could soon serve up the option to “Set reminder”.

Tapping this effectively lets you create an entry in the Google Calendar app. So let's say you snapped a photo of a restaurant board offering specials on certain days: you could use the new feature to set a calendar reminder to check out the restaurant on a particular day.
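Under the hood, a calendar reminder is just structured event data. As a rough illustration of the kind of entry such a feature would hand off to a calendar app, here is a minimal sketch in the standard iCalendar format (the reminder text, date, and UID are invented for the example):

```python
from datetime import datetime

def make_reminder_event(summary: str, start: datetime, uid: str) -> str:
    """Build a minimal iCalendar VEVENT, the standard format that
    calendar apps (including Google Calendar) can import."""
    stamp = start.strftime("%Y%m%dT%H%M%S")
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{stamp}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

# Hypothetical reminder taken from a photo of a specials board
ics = make_reminder_event(
    "Try the Tuesday specials",
    datetime(2024, 1, 23, 18, 30),
    "photo-reminder-1@example.com",
)
```

Any app that speaks iCalendar could import the resulting text, which is what makes the "photo to reminder" handoff so cheap to build.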

As someone who snaps photos on his phone to serve as reminders and reference points, this new feature seems particularly handy. Sure, it’s not hard to bounce into a calendar app and set your own reminder, but being able to do things with fewer taps or swipes through app menus is certainly appealing to me. And it also means the information you’re after is right in front of you, rather than forcing you to bounce between apps.

Unfortunately, this reminder feature doesn't appear to have rolled out widely yet – it isn't popping up in Google Photos on my iPhone 13 Pro or Google Pixel 7 Pro. However, such updates can take time to roll out worldwide. I’m running Google Photos version 6.60, so I may need to wait for version 6.61, as that was the version The SpAndroid used to test the reminder feature.

Ever smarter software 

Given Google is pushing AI-powered tools into its software, as well as Pixel phones with the Pixel 8 Pro at the top of the pile, it’s no surprise to see it bolster Photos with AI-centric features. 

It might seem creepy that Google could extract all manner of information from your smartphone snaps, but these tools can be very handy at times, letting you do more with less back and forth between apps. 

I’m actually keen to see Google do more in embracing interoperability between its app ecosystem. I’d like Google Maps to pull Google Photos into my timeline so I can better retrace my steps when trying to remember where I went and when; you can manually add photos to Maps and map locations can be automatically added to photos, but it doesn't quite feel like there’s perfect harmony between the apps. 

Nevertheless, it’s neat to see how Google Photos continues to evolve. I only hope it sticks to the side of being handy and doesn't stray into the realms of creepiness. 


Bing AI could soon be much more versatile and powerful thanks to plug-ins

Microsoft’s Bing AI may be close to finally getting plug-ins, a feature that has been experimented with before, and will make the chatbot considerably more versatile and powerful (in theory, anyway).

Windows Latest reports that the update to add plug-ins has rolled out to a ‘small’ number of Bing Chat users over the weekend, and the tech site was one of those to get access.

Note that it appears the rollout is only happening for those using the Canary version of Microsoft’s Edge browser (and Windows Latest only got the feature in that preview release, not in the finished version of Edge).

We’re told that the AI currently offers five plug-ins to testers and you can pick any three of those to use in a session. If you want to change plug-ins, you’ll need to start a new Bing Chat session.

Windows Latest carried out some testing with a couple of those plug-ins, and the results seemed useful, with the OpenTable add-on providing some restaurant recommendations in a query.

Other plug-ins available in testing include Kayak, Klarna, and a shopping add-on for buying suggestions – we’ve already got you covered there, of course, especially for the imminent Black Friday sale – but it may be the case that different plug-ins appear for different users.


Analysis: Faster and better

Eventually, of course, there will be a whole load of plug-ins for the Bing AI, or that’s certainly Microsoft’s plan, although they’ll doubtless be rolled out in stages over time. One of those will be the much-awaited ‘no search’ function that was switched to be implemented via a plug-in not so long ago. (This allows the user to specify that the AI can’t use search content scraped from the web in its responses).

We’ve seen plug-ins in a limited test rollout before (in August), but they were pulled, so this is effectively a return of the feature – hinting it might arrive sooner rather than later.

Fingers crossed that happens. The good news is that Windows Latest observes that these new plug-ins seem to be more responsive and work better than the old efforts (performance concerns are likely one of the reasons the test plug-ins were pulled earlier this year).


ChatGPT Plus gets big upgrade that makes it more powerful and easier to use

ChatGPT is undoubtedly one of the best artificial intelligence (AI) tools you can use right now, but a new update could make it even better by increasing the range of file types it can work with, as well as making it a little more independent when it comes to switching modes.

The changes are currently being tested in beta and are expected to come to ChatGPT Plus, the paid-for version of OpenAI’s chatbot that costs $20 / £16 / AU$28 a month. As detailed by ChatGPT user luokai on Threads (via The Verge), these changes could make a big difference to how you use the AI tool.

Specifically, ChatGPT Plus members are now able to upload various files that the chatbot can use to generate results. For instance, luokai demonstrated how ChatGPT can analyze a PDF that a user uploads, then answer a question based on the contents of that PDF.

Elsewhere, the beta version of ChatGPT can create images based on a picture uploaded by a user. That could make the chatbot much better at generating the type of content you’re after, without having to rely solely on your prompt or description.

Automatic mode switching

ChatGPT responding to the prompt 'is there life after death?'

(Image credit: Shutterstock / Ascannio)

That’s not all this beta update brings with it. As well as file analysis, ChatGPT could soon be able to switch modes without any user input, in a move that might make the tool much less cumbersome to use.

Right now, you need to tell ChatGPT exactly what mode you want to use, such as Browse with Bing. In the current beta, though, ChatGPT is able to determine the mode automatically based on your conversation with the chatbot.

That can extend to generating Python code or opting to use DALL-E to generate an image too, meaning you should be able to get results much closer to what you wanted without having to make an educated guess as to the best mode to use.
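OpenAI hasn't said how the automatic switching works, but the basic dispatch idea is easy to picture with a toy sketch: look at the prompt, infer intent, and route to a tool. The real system would infer intent with a language model; the keyword rules and mode names below are purely illustrative:

```python
def pick_mode(prompt: str) -> str:
    """Toy stand-in for automatic mode selection: route a prompt to a
    tool based on simple keyword cues. Invented for illustration --
    not ChatGPT's actual routing logic."""
    p = prompt.lower()
    if any(w in p for w in ("draw", "picture of", "image of")):
        return "image-generation"   # e.g. hand off to DALL-E
    if any(w in p for w in ("latest", "news", "today")):
        return "web-browsing"       # e.g. Browse with Bing
    if any(w in p for w in ("plot", "calculate", "dataset")):
        return "code-interpreter"   # e.g. run Python
    return "chat"                   # default conversational mode

print(pick_mode("Draw a cat wearing a hat"))  # image-generation
```

The win for the user is exactly what the article describes: no more guessing which mode to select before typing.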

All of these changes could make OpenAI’s chatbot much easier to use if you’re a ChatGPT Plus subscriber. There’s no word yet on when the features will be fully rolled out, so stay tuned for more news on that front.


The AI backlash begins: artists could protect against plagiarism with this powerful tool

A team of researchers at the University of Chicago has created a tool aimed at helping online artists “fight back against AI companies” by inserting, in essence, poison pills into their original work.

Called Nightshade, after the family of toxic plants, the software is said to introduce poisonous pixels into digital art that mess with the way generative AIs interpret it. Models like Stable Diffusion work by scouring the internet, picking up as many images as they can to use as training data. What Nightshade does is exploit this “security vulnerability”. As the MIT Technology Review explains, these “poisoned data samples can manipulate models into learning” the wrong thing; for example, a model could come to see a picture of a dog as a cat, or a car as a cow.

Poison tactics

As part of the testing phase, the team fed Stable Diffusion infected content and “then prompted it to create images of dogs”. After being given 50 samples, the AI generated pictures of misshapen dogs with six legs. After 100, you begin to see something resembling a cat. Once it was given 300, dogs became full-fledged cats. Below, you'll see the other trials.
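The report doesn't spell out Nightshade's algorithm, but the dynamic behind those trials – mislabeled, cat-like samples dragging the model's learned notion of "dog" toward "cat" – can be sketched with a deliberately crude, one-dimensional toy model. All numbers here are invented for illustration:

```python
from statistics import mean

def class_mean(samples, label):
    """Average feature value of all samples carrying a given label --
    a crude stand-in for what a model 'learns' a concept looks like."""
    return mean(f for f, l in samples if l == label)

# 1D stand-in features: dog-like images ~0.0, cat-like images ~1.0
clean = [(0.0, "dog")] * 500 + [(1.0, "cat")] * 500

# Poisoned samples: captioned "dog" but carrying cat-like features,
# mimicking Nightshade's perturbed images
poisoned = clean + [(1.0, "dog")] * 300

print(class_mean(clean, "dog"))     # 0.0   -- "dog" still means dog
print(class_mean(poisoned, "dog"))  # 0.375 -- "dog" drifts toward cat
```

With 300 poisoned samples against 500 clean ones, the learned "dog" concept has drifted well toward the cat features, which is roughly the trajectory the 50/100/300-sample trials describe.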

Nightshade tests

(Image credit: University of Chicago/MIT Technology Review)

The report goes on to say Nightshade also affects “tangentially related” ideas because generative AIs are good “at making connections between words”. Messing with the word “dog” jumbles similar concepts like puppy, husky, or wolf. This extends to art styles as well. 

Nightshade's tangentially related samples

(Image credit: University of Chicago/MIT Technology Review)

It is possible for AI companies to remove the toxic pixels. However, as the MIT post points out, it is “very difficult to remove them”: developers would have to “find and delete each corrupted sample.” To give you an idea of how tough that would be, a 1080p image has over two million pixels. If that weren’t difficult enough, these models “are trained on billions of data samples.” So imagine looking through a sea of pixels to find the handful messing with the AI engine.

At least, that’s the idea. Nightshade is still in the early stages. Currently, the tech “has been submitted for peer review at [the] computer security conference Usenix.” MIT Technology Review managed to get a sneak peek.

Future endeavors

We reached out to team lead Professor Ben Y. Zhao at the University of Chicago with several questions. 

He told us they do have plans to “implement and release Nightshade for public use.” It’ll be a part of Glaze as an “optional feature”. Glaze, if you’re not familiar, is another tool Zhao’s team created giving artists the ability to “mask their own personal style” and stop it from being adopted by artificial intelligence. He also hopes to make Nightshade open source, allowing others to make their own venom.

Additionally, we asked Professor Zhao if there are plans to create a Nightshade for video and literature. Right now, multiple literary authors are suing OpenAI, claiming the program is “using their copyrighted works without permission.” He said that developing toxic software for other kinds of work will be a big endeavor “since those domains are quite different from static images.” The team has “no plans to tackle those, yet.” Hopefully someday soon.

So far, initial reactions to Nightshade are positive. Junfeng Yang, a computer science professor at Columbia University, told Technology Review this could make AI developers “respect artists’ rights more” – and maybe even make them willing to pay out royalties.

If you're interested in picking up illustration as a hobby, be sure to check out TechRadar's list of the best digital art and drawing software in 2023.


Adobe’s new photo editor looks even more powerful than Google’s Magic Editor

Adobe MAX 2023 is less than a week away, and to promote the event, the company recently published a video teasing its new “object-aware editing engine” called Project Stardust.

According to the trailer, the feature can identify individual objects in a photograph and instantly separate them into their own layers. Those same objects can then be moved around on-screen or deleted. Selection can be done either manually or automatically via the Remove Distractions tool. The software appears to understand the difference between the main subjects in an image and the background people you want to get rid of.

What’s interesting is that moving or deleting something doesn’t leave behind a hole. The empty space is filled in, most likely by a generative AI model. Plus, you can clean up any left-behind evidence of a deleted item. In its sample image, Adobe erases a suitcase held by a model and then edits her hand so that she’s holding a bouquet of flowers instead.
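Adobe hasn't said how the fill works, but the reason no hole is left behind can be illustrated with a deliberately crude, non-generative stand-in: erase a region, then synthesize replacement pixels from the surroundings. Stardust would use a generative model for this; the naive average below is just the idea in miniature:

```python
def fill_hole(img, top, left, h, w):
    """Naive 'content-aware fill': replace a rectangular hole with the
    average of the pixels ringing it. A toy stand-in for the generative
    fill that tools like Project Stardust appear to use."""
    ring = []
    for r in range(top - 1, top + h + 1):
        for c in range(left - 1, left + w + 1):
            inside = top <= r < top + h and left <= c < left + w
            if not inside and 0 <= r < len(img) and 0 <= c < len(img[0]):
                ring.append(img[r][c])
    fill = sum(ring) / len(ring)
    for r in range(top, top + h):
        for c in range(left, left + w):
            img[r][c] = fill
    return img

# 4x4 grayscale image: uniform background (10) with a bright 'object' (99)
img = [[10] * 4 for _ in range(4)]
img[1][1] = img[1][2] = img[2][1] = img[2][2] = 99  # the 'suitcase'
fill_hole(img, 1, 1, 2, 2)
print(img[1][1])  # 10.0 -- the hole blends into the background
```

Real inpainting has to invent plausible texture rather than averages, which is where the generative model earns its keep.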

Project Stardust editing

(Image credit: Adobe)

Project Stardust generative AI

(Image credit: Adobe)

The same tech can also be used to change articles of clothing in pictures. A yellow down jacket can be turned into a black leather jacket or a pair of khakis into black jeans. To do this, users will have to highlight the piece of clothing and then enter what they want to see into a text prompt. 

Stardust replacement tool

(Image credit: Adobe)

AI editor

Functionally, Project Stardust operates similarly to Google’s Magic Editor, a generative AI tool present on the Pixel 8 series. That tool lets users highlight objects in a photograph and reposition them in whatever manner they please. It, too, can fill gaps in images by creating new pixels. However, Stardust feels much more capable: the Pixel 8 Pro’s Magic Eraser can fill in gaps, but neither it nor Magic Editor can generate new content from a text prompt. Additionally, Google’s version requires manual input, whereas Adobe’s software doesn’t need it.

Seeing these two side by side, we can’t help but wonder if Stardust is actually powered by Google’s AI tech. Very recently, the two companies announced they were entering a partnership and offering a free three-month trial of Photoshop on the web to people who buy a Chromebook Plus device. Perhaps this “partnership” runs a lot deeper than free Photoshop, considering how similar Stardust is to Magic Editor.

Impending reveal

We should mention that Stardust isn't perfect. If you look at the trailer, you'll notice some errors like random holes in the leather jacket and strange warping around the flower model's hands. But maybe what we see is Stardust in an early stage. 

There is still a lot we don’t know, like whether Stardust is a standalone app or will be housed in, say, Photoshop, and whether it's releasing in beta first or we're getting the final version. All will presumably be answered on October 10 when Adobe MAX 2023 kicks off. What’s more, the company will be showing other “AI features” coming to “Firefly, Creative Cloud, Express, and more.”

Be sure to check out TechRadar’s list of the best Photoshop courses online for 2023 if you’re thinking of learning the software, but don’t know where to start. 


Taiwan highlights powerful AI and cloud products with the Taiwan Excellence Awards

In the internet age, reliable connectivity and networks are paramount. A great deal goes into keeping the back-end of a network running, and still more is needed to provide quality services to the thousands or millions of users connecting every day. Recently, AI and cloud applications have risen to meet these needs. And in the past year, a few particular products stood out from the pack as offering more innovation and life-changing impact than the rest, earning them a Taiwan Excellence Award for 2023. 

Over the last 30 years, Taiwan has highlighted its domestic products with the Taiwan Excellence Awards, which go to products offering technical innovation and real, life-changing impact. It is this focus on innovative value that the International Trade Administration (TITA) and Taiwan External Trade Development Council (TAITRA) seek to highlight across the world. 

Each year, Taiwan Excellence Awards selects only a handful of innovative products that have met rigorous criteria for R&D, design, quality, and marketing. The selected products receive the Taiwan Excellence Award mark, which serves as a distinction and testament to their high standards. 

These products, designed and made in Taiwan, embody a combination of innovation and excellence reflecting the following aspects:

  • Innovative: Products that prioritize customer satisfaction through advanced and thoughtful design.
  • Excellence: Companies that are passionate about common goals and foster team spirit, enthusiasm, and knowledge-sharing.
  • Value: Practical products that truly enhance everyday life.
  • Dependable: Reliable products with quality and performance you can count on.

The highlights in AI and cloud computing in this year’s Taiwan Excellence Awards are the PCIe 4.0 Enterprise SSD Controller IC from Phison Electronics, the LoRa AIoT Network Solution from Planet Technology, and SysTalk.Chat from TPIsoftware.

Phison’s PCIe 4.0 Enterprise SSD Controller IC — PS5020-E20 (X1 Solution) 

Phison’s X1 PCIe 4.0 Enterprise SSD Controller IC

(Image credit: Phison Electronics Corp)

Phison’s PS5020-E20 is a next-generation, high-performance, enterprise-class SSD controller and solution that will help reduce the total cost of ownership (TCO) in enterprise use cases through increased storage density, low power consumption, and high performance. The advanced solution enables smarter storage infrastructure in wide-ranging enterprise storage applications such as hyperscale data centers, high-performance computing (HPC), and artificial intelligence (AI). 

With its cutting-edge ASIC designs and engineering know-how, combined with strong synergy with its partners, Phison looks to answer demanding enterprise SSD market needs while working toward a leading position with state-of-the-art enterprise-grade NAND flash memory controller IC solutions. PS5020-E20-powered NAND storage solutions are fine-tuned for specific workloads and applications like AI, cloud storage, and 5G edge computing to give enterprise customers the biggest bang for their buck.

Planet Technology’s LoRa AIoT Network Solution

Planet Technology’s LoRa AIoT Network Solution

(Image credit: PLANET Technology Corporation)

The PLANET LoRa AIoT Network Solution is specially designed for the efficient management of long-range IoT network infrastructure. It is flexible enough to be used in a variety of vertical applications such as factories, smart cities, energy facilities, and transportation, and it gives ISPs and enterprises a cost-effective, reliable way to integrate existing AIoT network deployments with new network devices and multiple communication protocols. 

The system can combine LoRa LPWAN, Wi-Fi 6 and optical fiber connections, making it exceedingly versatile, and it’s compatible with industrial-grade VPNs (IPSec/PPTP/L2TP over IPSec) and cybersecurity functions. The units offer a rugged industrial design, support MQTT and Modbus TCP, and come pre-configured for the EU868/US915/AS923 sub-1GHz frequency bands. Serving as a comprehensive network solution, it offers an intelligent central management platform with a user-friendly web UI, a secure communication transmission gateway, and data conversion equipment. 
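Of the protocols named here, Modbus TCP is simple enough to illustrate: a request is just a fixed big-endian frame (MBAP header plus PDU) that a gateway like this forwards to field devices. A minimal sketch of a "read holding registers" request, not PLANET's implementation (the transaction, unit, and register values are arbitrary examples):

```python
import struct

def modbus_read_request(transaction_id: int, unit_id: int,
                        start_addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'read holding registers' (function 0x03)
    request frame: MBAP header followed by the PDU, all big-endian."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB",
                       transaction_id,
                       0x0000,          # protocol id: always 0 for Modbus
                       len(pdu) + 1,    # remaining bytes incl. unit id
                       unit_id)
    return mbap + pdu

frame = modbus_read_request(1, 17, 0x0000, 2)
print(frame.hex())  # 000100000006110300000002
```

Twelve bytes on the wire is all it takes to poll two registers, which is why the protocol remains a staple of industrial gateways.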

SysTalk.Chat

SysTalk.Chat

(Image credit: TPIsoftware Corporation)

TPIsoftware's SysTalk.Chat is a powerful conversational AI service delivering smart customer service. It has won a Taiwan Excellence Award 2023, a CX Asia Excellence Award 2021 in Singapore, and more, and TPIsoftware was selected as a Cool Vendor in Conversational AI in APAC 2018 by Gartner. With high market penetration, SysTalk.Chat has freed up employees’ time for higher-value work, cut labor costs for enterprises by 40%, helped enterprises expand omnichannel strategies, and increased customer satisfaction.

SysTalk.Chat has been widely adopted across industries, serving millions of monthly users in Taiwan with a robust customer experience. Built with an exclusive dual-brain NLU + FAQ engine and voice recognition, SysTalk.Chat can provide human-like dialogue and activate voice recognition with minimal corpus data. What’s more, SysTalk.Chat is also a visual chatbot, standing out for visual recognition capabilities that can interpret all kinds of documents with built-in AI-OCR technology. 

SysTalk.Chat can further integrate with TPIsoftware's DigiFusion, an iPaaS middle platform featuring API management to connect other existing services, which allows services such as bank transactions to be done simply through chatbot conversations, making customer service and the conversational service blueprint more comprehensive.  

Made in Taiwan: An environment that fosters innovation

The companies showcased here have forged everyday excellence into the DNA of their products. The Taiwan Excellence Awards showcase Taiwan's belief that innovative, cutting-edge technology can lead the way to a better future. The winners of the 2023 Taiwan Excellence Awards also demonstrate how Taiwan's work to build an innovative economic landscape is bearing fruit.

For more information, visit Taiwan Excellence and see the best made in Taiwan. 
