
Is AI ruining the internet?

With AI, every schoolchild can become a professional author and graphic artist. That's what many believe.
© Egypt Business Directory

AI content is flooding us

AI-generated content is flooding the internet, lowering quality and threatening diversity. Because new models increasingly train on the output of older ones, a cycle of "incestuous learning" emerges that further erodes the variety and accuracy of content. From AI-generated ratings to deepfakes, the web is becoming ever more homogeneous, with potentially negative consequences for revenue models and copyright. Nevertheless, human creativity remains irreplaceable on the web.

Google image search flooded

AI-generated fake images of celebrities are coming to dominate Google's image results. Loaded Media uses the technique to spread fake news and to game search rankings. Google is fighting AI-driven SEO fraud, while proposed legislation such as the No AI FRAUD Act, which targets non-consensual digital reproductions of a person's likeness, is being developed to protect integrity online.
  
Influencer fraud

Corporate influencer Lara Sophie Bothur gained followers on LinkedIn unusually quickly, raising questions about the authenticity of her growth. Experts doubt that the surge happened naturally and suspect the use of engagement pods or purchased followers. Deloitte, however, maintains that Bothur's success is organic and driven by her increased activity.

Unappetising images

Instacart is experimenting with generative AI for recipes and food photos, sometimes with unappetising results: the AI-generated images often show unrealistic food compositions and physically impossible details, and some of the generated recipe instructions are simply wrong. The company nevertheless continues to use the technology. Its partnership with OpenAI is meant to provide users with menu suggestions, but in practice the technology clearly still has room for improvement. Instacart's shares have fallen since its IPO, and its use of AI remains controversial.
  
Poisoned images

Nightshade, a University of Chicago project, offers artists an innovative way to protect their work from unauthorised AI training. By "poisoning" the pixel data of an image with subtle perturbations, it causes models trained on that image to learn false associations, for example interpreting the Mona Lisa as a cat. The method is intended to pressure AI developers into paying for licensed artworks instead of scraping them.
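
The idea can be sketched in code. The following is a simplified illustration of feature-space poisoning, not Nightshade's actual algorithm (which is considerably more sophisticated and targets the generator's own encoder); the ResNet-18 feature extractor, the optimisation settings and the file names are assumptions made purely for this example. It nudges an image's embedding toward that of an unrelated "anchor" image (say, a cat) while keeping the pixel changes subtle, so a model trained on the result learns a false association.

```python
# Simplified, illustrative poisoning sketch -- NOT Nightshade's algorithm.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# A frozen ResNet-18 stands in for the victim model's image encoder
# (an assumption for illustration only).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop classifier, keep 512-d features
backbone.eval().to(device)
for p in backbone.parameters():
    p.requires_grad_(False)

MEAN, STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

def features(x: torch.Tensor) -> torch.Tensor:
    # Standard ImageNet normalisation before the frozen encoder.
    return backbone(TF.normalize(x, MEAN, STD))

def load_image(path: str) -> torch.Tensor:
    img = Image.open(path).convert("RGB").resize((224, 224))
    return TF.to_tensor(img).unsqueeze(0).to(device)

def poison(source_path: str, anchor_path: str,
           eps: float = 0.05, steps: int = 200, lr: float = 0.005) -> torch.Tensor:
    """Return a near-identical copy of `source` whose embedding
    resembles that of `anchor` (e.g. a cat photo)."""
    source, anchor = load_image(source_path), load_image(anchor_path)
    with torch.no_grad():
        target = features(anchor)
    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the poisoned image's features toward the anchor's.
        loss = torch.nn.functional.mse_loss(features(source + delta), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                             # stay subtle
            delta.copy_((source + delta).clamp(0, 1) - source)  # valid pixels
    return (source + delta).detach().cpu()

# Hypothetical usage -- file names are placeholders:
# poisoned = poison("mona_lisa.jpg", "cat.jpg")
# TF.to_pil_image(poisoned.squeeze(0)).save("mona_lisa_poisoned.png")
```

In Nightshade's real setting the perturbation is optimised against the text-to-image model's own encoder and tied to specific prompts, which is what makes the poisoned samples effective against generators trained on scraped data.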
