You are constantly getting a lot of spam in your email. The e-mail inbox is crammed with spam messages. Now, your web browser faces a new menace: slop.
Slop stands for the stupid AI content that is mindlessly shared online. Slop is not interactive and does not respond to users’ inquiries or serve their interests as chatbots do. Rather, it is programmed to appear human-generated, to generate money from advertisements, and to influence search engine results.
Deplorable as slop may be, few people want to see it, but the internet’s economy engenders its production. The AI models allow for the easy creation of bulk text or image content. Such AI can generate texts on any topic, post infinite pictures of mountains and beaches, and generate positive replies. This strategy is understandable even if only a handful of users visit the site, share the content, or click on ads since the minimal cost of generating slop is reasonable. However, just as spam has a mostly detrimental effect, so too does slop. Visitors spend too much time searching through low-quality content to get to the content they require, which is far more expensive than the few cents earned by the slop creators.
A developer Simon Willison who contributed to the popularization of the term “slop” thinks that calling this phenomenon is of great importance. The word spam did not exist before and people could not know the harmful effects of unwanted marketing messages. Willison believes that ‘slop’ will serve the same purpose – showing that posting unchecked AI content is unacceptable.
Slop can be particularly damaging when it is manifestly incorrect. For instance, an AI-written Microsoft Travel article once suggested that readers should consider the “Ottawa Food Bank” as a top destination when they travel to Canada’s capital. Sometimes, slop is so absurd that it goes viral, like the careers advice article that seriously explained an old comic punchline: ‘They pay me in worms. ’”.
Books that are written by AI are another concern. But mushroom pickers are advised to be careful and not to buy books on Amazon that presumably look like they were written by artificial intelligence and contain harmful advice for identifying edible mushrooms from poisonous ones.
AI images are widely used in Facebook accounts nowadays. Jesus with prawn limbs, children in bottle cars, fake dream homes, oldies ‘celebrating’ their 122nd birthdays, and such are circulated. Jason Koebler of 404 Media describes this as the zombie internet – where there is a proliferation of bots, there are humans, and there are millions of old accounts – where there is little or no real social interaction.
Meta’s president of global affairs Nick Clegg noted that Facebook is working on teaching algorithms to identify AI-generated posts. People want to know what is genuine and what is artificial and that is why it has become important to differentiate human and synthetic content.
AccuraCast Digital Marketing Agency’s Managing Director Farhad Divecha has experienced several instances of users reporting ads as AI-generated slop. If consumers feel that they are continuously being fed with low-quality content then this may have negative effects on the industry.
Dealing with email spam was a problem that demanded an enormous coordination effort from everyone involved in the email industry and resulted in many transformations of email systems. Modern email giants like Gmail even employ AI to filter spam messages. As for slop, the future trends are less clear. Now companies like Google have started displaying the AI answers at the top of the search results. Though these AI overviews are presented with solid safety nets, slop is proliferating throughout the rest of the internet as well.
How Secure Are AI Chatbots If Safeguards Can Be Easily Bypassed?
Researchers from the UK government have found that incentives put in place to prevent AI chatbots from generating harmful illegal or sexually suggestive content can be easily circumvented. AISI experimented with five AI models and concluded that the security of all of them against these types of attacks is extremely low.
Vulnerable to Jailbreaks
The term “jailbreak” means particular questions or text commands that allow AI models to generate responses they are supposed to circumvent. The AISI’s tests showed that even simple approaches could outsmart the safeguards. For example, simply instructing the AI with phrases such as “okay I will” proved to be sufficient to bypass the safeguards put in place by the platform.
Examples of Harmful Prompts
The researchers experimented with the models using toxic inputs from an academic paper written in 2024. These consisted of requests for the creation of harmful or inappropriate texts, such as denying the Holocaust, establishing a sexist e-mail, or encouraging a person to commit suicide. Even though the models were supposed to filter out such responses, dangerous content still showed up when these prompts were entered.
Developers’ Claims vs. Reality
AI developers say that they have effective safeguards installed. OpenAI for example has said that its technology is prohibited from producing hate speech or violent content. The same can be said about Anthropic and Meta, which have been emphasizing their interest in avoiding dangerous outcomes. However, the AISI’s conclusions also imply that such measures are not entirely effective.
Real-World Examples
These vulnerabilities have been demonstrated in real-life cases. For example, the OpenAI GPT-4 was asked to respond as if he were the late grandmother of the user, who claims to be a chemical engineer, and the system explained how to make napalm.
Broader Implications
The survey was published before the planned summit in Seoul where the leaders and experts of other countries will discuss the safety and regulation of the AI technology. This shows that there is still a need to worry about the reliability of AI systems.
Future Steps
The AISI has intentions of setting up another office based in San Francisco so that it can be closer to Meta, OpenAI, and Anthropic to enhance its research on enhancing AI safety.
In summary, while developers of AI chatbots are trying to make these systems safer, UK researchers have demonstrated that the existing solutions do not make chatbots significantly less vulnerable to simple manipulations that can lead to the creation of dangerous content. This shows that further efforts to make AI safer and more secure are necessary.
AI or Be Left Behind? Infosys CTO Rafee Tarafdar Crucial Advice for Engineering Students
Students majoring in engineering are preparing for a new career environment disrupted by artificial intelligence (AI). Infosys’s CTO, Rafee Tarafdar, tells them to do this by learning how to use AI. At a recent event in Gurugram hosted by Moneycontrol and CNBC-TV18, Tarafdar highlighted a critical difference emerging in the IT workforce: The Producers and Consumers of AI.
For new non-software engineering graduates, Tarafdar suggests AI tooling mastery. He said: ‘If an individual knows how to utilize AI tools effectively then this individual may work faster and be of greater value’.
In the case of students who have a comprehensive engineering background, Tarafdar recommends concentrating on generating AI solutions. This can include optimizing an AI model, introducing new methods, and creating an application based on an open-source or cloud-based AI model. It makes you highly relevant in the industry’ he stated adding that the industry requires both AI creators and consumers.
Infosys has implemented several Gen AI-based products within its organization in the past 18 months. For example, the company’s learning platform has been completely transitioned to the Gen AI format. Tarafdar referred to a new learning method known as Socrative learning where large language models are used to assist employees in developing reasoning skills and make their learning more adaptive.
Infosys has also offered several AI-supported code support features to its developers to facilitate the search for previous projects’ details. Further, they introduced a sales assistant feature for the employees involved in client interaction, which gives them easy access to the company’s archive, containing information on projects completed throughout its forty-plus-year history.
This is reflected in the increasing role of AI specialists and the changes in recruitment policies made by Infosys. About one year later, in April, over 50% of fresh recruits are from external sources due to the increased imperative for specialized talent in AI and similar technologies.
To sum up, AI is the key competency that every future IT professional should acquire. As users or developers of various AI systems, graduates in engineering could substantially increase their efficiency and contribution to the field. Infosys is one of the companies that deeply implement AI into their company and also highlights the role of AI skills in the recruitment process.