Controversy arises as Haiper AI, a London-based startup, is scrutinised for allowing the creation of misleading and harmful content without adequate protective measures.
In the rapidly evolving realm of artificial intelligence, particularly in the area of generative AI, one London-based startup has found itself at the centre of controversy. Haiper AI, an image and video generation platform, has come under scrutiny for allegedly allowing users to produce harmful content without adequate safeguards. This revelation comes at a time when the effectiveness of AI safety measures is increasingly being questioned.
Founded by former Google DeepMind staff, Haiper AI launched in March with an £11 million investment spearheaded by Octopus Ventures, a prominent UK venture capital firm with investments in companies such as Zoopla and Depop. Operating in a fashion akin to popular tools like OpenAI’s DALL-E, Haiper allows users to generate images and videos based on textual descriptions.
However, a probe by UKTN revealed that Haiper lacks the robust protective measures employed by its contemporaries to inhibit the creation of potentially misleading or harmful content. During tests, UKTN was able to generate images depicting scenarios such as former US President Donald Trump meeting with pop star Taylor Swift and British Prime Minister Keir Starmer waving a burning Israeli flag. AI safety experts found the images concerning because of their potential use in spreading disinformation.
While many AI developers have put in place systems to prevent the misuse of their technologies, including prohibitions on creating images of individuals without consent, it appears Haiper’s safeguards are less stringent. For instance, when similar prompts were input into Meta AI’s platform, users received a message indicating restrictions against creating potentially misleading images of real people. OpenAI’s ChatGPT also restricts generating images of public figures to avoid privacy violations and misuse.
Despite Haiper’s terms of use, which stipulate user inputs should not include personal information without consent, the platform’s algorithms did not flag or prevent the creation of these contentious images during UKTN’s tests.
This situation is exacerbated by recent instances where AI-driven technology has been used to generate misleading audio and images involving political figures in the UK. These include fake audio clips targeting Mayor of London Sadiq Khan and Health Secretary Wes Streeting, as well as AI-generated images suggesting Taylor Swift endorsed Donald Trump’s campaign. Such instances have highlighted the dangers of AI-generated misinformation, raising public and political concern over the potential influence on public opinion.
The cases underscore broader issues within the AI sector, where the rapid advancement of generative technologies frequently outpaces the development of effective moderation systems. AI platforms like Stability AI have established systems that flag and restrict the creation of images involving specific individuals without consent, showcasing a contrasting approach to Haiper’s methods.
Haiper did not issue a statement in response to the concerns raised by UKTN’s findings. This silence may add to the growing apprehension among industry observers and the public over the accountability measures AI firms should deploy to combat potential misuse of their technologies.
The issue of misleading AI-generated content is not limited to images and videos; it has also prompted legal action. A notable example is consumer finance expert Martin Lewis's lawsuit against Facebook over fake adverts that used his likeness. Lewis ultimately settled with Facebook, which agreed to donate £3 million to Citizens Advice as part of the resolution. The case remains a high-profile illustration of the legal and ethical challenges posed by fabricated content.
As AI continues to advance, the balance between innovation and safeguarding against misuse remains delicate. The case of Haiper AI amplifies the ongoing debate over the responsibility of AI companies to deploy adequate defences against potential abuse and harm.
Source: Noah Wire Services
More on this & verification
- https://datainnovation.org/2023/03/critics-of-generative-ai-are-worrying-about-the-wrong-ip-issues/ – Discusses the intellectual property issues surrounding generative AI, including the misuse of copyrighted content and the need for robust enforcement of existing IP rights.
- https://www.youtube.com/watch?v=De67omtnC_4 – Provides an overview of Haiper AI, its founders, and its capabilities, including text-to-video and image-to-video generation, which is relevant to the discussion of AI-generated content.
- https://journeyaiart.com/blog-ai-video-generator-free-for-all-pixverse-vs-haiper-full-review-47547 – Compares Haiper AI with other AI video generators, highlighting its features and limitations, such as the ability to generate short videos and upcoming features.
- https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda – Details the use of AI-generated images in political propaganda and their potential to spread disinformation, which aligns with concerns about Haiper AI’s safeguards.
- https://www.reddit.com/r/ArtistLounge/comments/13c1qk6/ai_art_has_ruined_art_station/ – Highlights the impact of AI-generated art on creative communities and the lack of effective measures to filter out AI-generated content, mirroring concerns about Haiper AI’s moderation.
- https://datainnovation.org/2023/03/critics-of-generative-ai-are-worrying-about-the-wrong-ip-issues/#Generative%20AI%20Does%20Not%20Excuse%20Other%20Illegal%20Acts – Addresses the broader IP and legal issues surrounding generative AI, including the need for policymakers to address harmful activities such as creating forgeries and distributing copyrighted content.
- https://journeyaiart.com/blog-ai-video-generator-free-for-all-pixverse-vs-haiper-full-review-47547#Highlights – Discusses the differences in how various AI platforms handle content generation, including the lack of robust safeguards in some platforms like Haiper AI.
- https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda#text – Explains how AI-generated images can be used to spread misinformation and the implications for public trust and political polarization.
- https://datainnovation.org/2023/03/critics-of-generative-ai-are-worrying-about-the-wrong-ip-issues/#Should%20Generative%20AI%20Systems%20Be%20Allowed%20to%20Train%20on%20Content%20Without%20the%20Copyright%20Owner’s%20Explicit%20Permission? – Analyzes the argument that generative AI systems should not train on copyrighted content without explicit permission, which is relevant to the broader discussion of AI content generation and IP rights.
- https://journeyaiart.com/blog-ai-video-generator-free-for-all-pixverse-vs-haiper-full-review-47547###%20Comparing%20Pix%20Verse%20and%20Hyper%20AI – Provides a detailed comparison between different AI video generators, highlighting the strengths and weaknesses of each, including Haiper AI’s limitations in terms of video length and moderation.
- https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda#text – Discusses the ethical and legal challenges posed by AI-generated content, including the example of Martin Lewis’s lawsuit against Facebook, which underscores the need for robust safeguards against misuse.