In recent news, London-based startup Unitary has raised $15 million in a Series A funding round to further develop its AI-driven visual content moderation tool.
The technology is designed to identify and remove harmful content from online platforms, significantly enhancing safety for users. In this blog, we will delve into the details of Unitary’s achievement and how its AI technology works to make the internet a safer place.
Unitary’s Mission and Recent Success
Unitary, founded in 2019 by Sasha Haco and James Thewlis, has been making strides in the field of visual content moderation. Haco, a mathematician with a Ph.D. from Cambridge, and Thewlis, a computer vision specialist with experience at Facebook AI Research, joined forces through the Entrepreneur First accelerator program to create a solution that utilizes artificial intelligence to tackle the challenges of content moderation.
The recent Series A funding round, which raised $15 million, is a significant milestone for the company. This financial injection will enable Unitary to expand its operations and further improve its AI-driven content moderation tool. The round was led by Stockholm’s Creandum, with participation from Paladin Capital Group and Plural, showcasing strong support from the investment community.
AI-Powered Content Moderation
Unitary’s core technology revolves around the use of contextual AI to automate content moderation. In essence, its AI can “read” the context of user-generated videos, allowing it to distinguish between harmful and non-harmful content without human intervention. This is a game-changer for online safety, as it can swiftly identify and remove content that violates community guidelines, such as NSFW (Not Safe for Work) material or hate speech.
One of the standout features of Unitary’s technology is its ability to discern context. For example, it can differentiate between footage from a white supremacist rally and documentary footage that seeks to expose the dangers of such actions. This nuanced understanding is vital in ensuring that legitimate content isn’t mistakenly flagged or removed.
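To make the idea of automated video moderation more concrete, here is a minimal sketch of how a frame-based moderation pipeline might aggregate per-frame classifier scores into a video-level decision. This is purely illustrative: the `Frame` type, `moderate_video` function, thresholds, and aggregation rule are assumptions for the example, not Unitary’s actual API or method.

```python
# Hypothetical sketch of frame-level video moderation.
# All names and thresholds here are illustrative assumptions,
# not Unitary's actual implementation.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    timestamp: float
    pixels: bytes  # placeholder for decoded image data

def moderate_video(
    frames: List[Frame],
    classify: Callable[[Frame], float],
    threshold: float = 0.8,
) -> dict:
    """Score each frame (0.0 = benign, 1.0 = harmful) and aggregate.

    `classify` stands in for a per-frame vision model; in a real system
    it would be a trained classifier, not a user-supplied callable.
    """
    scores = [classify(f) for f in frames]
    max_score = max(scores) if scores else 0.0
    mean_score = sum(scores) / len(scores) if scores else 0.0
    return {
        "max": max_score,
        "mean": mean_score,
        # Flag on either a very harmful single frame or sustained
        # harmful content across the video, a crude stand-in for the
        # context-awareness described above.
        "flagged": mean_score > 0.5 or max_score > threshold,
    }
```

A context-aware system like the one described in this article would go well beyond per-frame scores, combining visual, audio, and textual signals so that, for example, documentary footage is not flagged simply because individual frames resemble harmful material.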
Scaling Up for a Safer Internet
To underline its commitment to enhancing online safety, Unitary has also expanded its operations. The company has grown its team to 53 members and significantly increased its content classification capacity to process a staggering 6 million videos daily. This expansion means that their AI-driven moderation tool can have a more significant impact across a wider range of languages and online communities.
Unitary’s success in securing $15 million in Series A funding is a testament to the importance of AI-driven content moderation in today’s digital landscape. With the rise of harmful and inappropriate content on the internet, Unitary’s technology plays a crucial role in maintaining a safe online environment for users. As the company continues to grow and refine its AI-powered moderation tool, we can look forward to a safer and more enjoyable internet experience for all.
Frequently Asked Questions
1. How does Unitary’s AI-driven content moderation work?
Unitary’s AI technology analyzes user-generated videos by “reading” their context. It can distinguish between different types of content, such as identifying hate speech or explicit material, all without human intervention.
2. Who founded Unitary, and what is their background?
Unitary was founded in 2019 by Sasha Haco and James Thewlis. Sasha Haco is a mathematician with a Ph.D. from Cambridge, and James Thewlis is a computer vision specialist with experience at Facebook AI Research.
3. What is the significance of Unitary’s recent Series A funding round?
Unitary secured $15 million in a Series A funding round, which is a major achievement for the company. This funding will allow it to expand its operations and further develop its AI-driven content moderation tool.