Major vulnerabilities in leading AI platforms, including ChatGPT and Gemini, could enable individuals to manipulate search outputs and influence geopolitical narratives within hours, according to SEO and AI specialist Alan CladX.
Speaking on the SEO Boardroom YouTube channel, CladX claimed he was able to exploit AI trust signals to alter how both platforms generated responses relating to the Israeli-Palestinian conflict for users in the United States.
“It’s pretty dangerous, in fact, because it means you can influence everything, including brands, but more importantly votes for a president,” he said. “A 10-year-old boy with an iPad can do what I did to influence a country… you don’t need even one line of code to do this.”
CladX described the method as a faster version of his previous “Aquapony” experiment, in which he published fake biographies to persuade search engines that riding ponies in swimming pools was a legitimate Olympic sport.
The new tactic centres on how AI systems identify and validate sources. CladX said fabricated content can be published in a low-competition digital market, for instance on a mock news site, where the scarcity of alternative data leads AI systems to accept it as credible.
Once validated locally, the same sources can then be cited on platforms such as Reddit in larger markets, prompting AI models to surface the narrative more widely.
CladX also criticised traditional SEO strategies, arguing that “white hat” tactics alone are ineffective in competitive sectors. He advised combining aggressive optimisation techniques with highly specific, data-led content.


