Google has long preached the gospel of “content written by people, for people.” But in a recent update, the search giant is quietly rewriting its own rules to acknowledge the rise of artificial intelligence.
In the latest iteration of Google Search’s “Helpful Content Update,” the phrase “written by people” has been replaced by a statement that Google is constantly monitoring “content created for people” to rank sites on its search engine.
The new language shows that Google recognizes AI as a tool heavily relied upon in content creation. But rather than simply distinguishing AI output from human writing, the leading search engine wants to surface content that benefits users, regardless of whether humans or machines produced it.
Meanwhile, Google is investing in AI across its own products, including an AI-powered news-generation service, its chatbot Bard, and new experimental search features. Updating its guidelines, then, also aligns with the company's strategic direction.
The search leader still aims to reward original, helpful, and human content that provides value to users.
“By definition, if you’re using AI to write your content, it’s going to be rehashed from other sites,” Google Search Relations team lead John Mueller noted on Reddit.
To SEO or not to SEO?
The implications are clear: repetitive or low-quality AI content could still hurt SEO, even as the technology advances. Writers and editors must still play an active role in content creation. Removing humans from the loop is risky because AI models tend to hallucinate; some errors are merely funny or offensive, but others can cost millions of dollars or even put lives in danger.
SEO, or search engine optimization, refers to strategies aimed at improving a website's rankings in search engines like Google. Higher rankings mean more visibility and traffic, so SEO experts have long tried to "beat" Google by optimizing content to match its ranking algorithm.
Google seems to be penalizing the use of AI for simple content summarization or rephrasing, and has its own ways of detecting AI-generated content.
"This classifier process is entirely automated, using a machine-learning model," Google says, meaning the company uses AI to tell good content from bad.
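Google has not published details of that classifier, but the general shape of an automated text classifier is well known: convert pages into features, train a model on labeled examples, and score new content. The snippet below is a minimal, hypothetical sketch using scikit-learn; the training pages and labels are invented purely for illustration.

```python
# Hypothetical toy sketch of an automated "helpful content" classifier.
# Google does not disclose its model; this only illustrates the general
# idea of scoring text with a trained machine-learning classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: (page text, label) pairs, where 1 = helpful.
pages = [
    "Step-by-step guide with original measurements and test results.",
    "Best best best product buy now click here top ranking.",
    "Interview with the tool's maintainer covering undocumented behavior.",
    "This article summarizes other articles that summarize other articles.",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: the simplest end-to-end text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(pages, labels)

# Score a new page; in production, a score like this could feed into ranking.
print(model.predict_proba(["A rehash of existing tutorials with no new detail."])[0][1])
```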
Part of the challenge, however, is that detecting AI content relies on imprecise tools. OpenAI itself recently retired its own AI classifier, acknowledging its inaccuracy. Detection is hard because generative models are trained to "appear" human, and the contest between content generators and content discriminators is unlikely to end: models only grow more powerful and accurate over time.
Further, training AI models on AI-generated content can lead to model collapse, in which successive generations of models lose diversity and drift away from the original data distribution.
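A toy simulation makes that feedback loop concrete. The sketch below (an illustration under invented parameters, not a published experiment) repeatedly fits a one-dimensional Gaussian to data, then samples the next "training set" from the fitted model; over generations, the fitted spread tends to drift and shrink.

```python
# Toy illustration of model collapse: each generation of the "model" is
# trained only on samples produced by the previous generation.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # stand-in for human-written data

for generation in range(20):
    mu, sigma = data.mean(), data.std()    # "train" a model on the current data
    data = rng.normal(mu, sigma, size=50)  # next generation sees only model output
    print(f"generation {generation:2d}: std = {data.std():.3f}")
```

With only the model's own output to learn from, estimation error compounds and the distribution's tails vanish first, which is the failure mode researchers call model collapse.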
Google says it is not trying to reproduce AI-generated data, but rather to identify it and reward human-written content accordingly. That approach resembles training a specialized AI discriminator: one model tries to create something that looks natural, while another model tries to judge whether the creation is natural or artificial. This process is already in use in generative adversarial networks (GANs).
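For readers unfamiliar with GANs, the sketch below shows that adversarial loop in PyTorch: a discriminator learns to separate "natural" data from generated data, while the generator learns to fool it. Every dimension, architecture, and hyperparameter here is an arbitrary toy choice; this is the generic GAN pattern, not anything Google has disclosed about its ranking systems.

```python
# Minimal GAN training loop: generator vs. discriminator.
import torch
import torch.nn as nn

dim = 16  # toy data dimensionality

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
discriminator = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, dim) + 2.0     # stand-in for "natural" data
    fake = generator(torch.randn(64, 8))  # generated ("artificial") data

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As each side improves, the other is forced to improve in turn, which mirrors the escalating contest between content generators and detectors described above.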
Standards will continue to evolve as AI proliferates. For now, Google appears focused on content quality rather than separating human contributions from those created with machines.