- Merca2.0
The Hidden Problem with ChatGPT in Marketing
Unveiling the Illusions and Limitations of AI-Driven Marketing Strategies
A few days ago, a good friend proudly showed me a marketing strategy plan to promote his brand, created using artificial intelligence—a combination of ChatGPT and the latest version of Grok. At first glance, the document seemed to include all the components of a proper strategy. It talked about KPIs, timelines, and deliverables, among other key elements. My friend insisted he needed my opinion on how best to take the initial steps of the plan.
My first move was to print it out. The presentation of lists—ChatGPT’s signature style—was daunting; the excessive number of steps hid a dark truth: the plan lacked a coherent structure, both chronologically and in terms of feasibility. Most of the steps were impossible for a fledgling company to follow. For example, it mentioned creating complete channels on Facebook Marketplace but assumed the company already had a well-defined identity for the end consumer. It also took for granted that the brand was already recognized in the market.
Looking at the KPIs, the plan confidently stated goals such as a 20% increase in brand recall or a 15% boost in sales. Again, everything looked good on paper, but none of these KPIs was tied to a prior measurement or an existing baseline.
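A KPI expressed as a percentage increase only means something relative to a measured starting point. A minimal sketch of that dependency, using invented numbers purely for illustration:

```python
def kpi_target(baseline: float, pct_increase: float) -> float:
    """Return the absolute target implied by a percentage-increase KPI.

    Without a measured baseline, a goal like "20% more brand recall"
    cannot be translated into a concrete number to aim for.
    """
    if baseline <= 0:
        raise ValueError("A percentage-increase KPI needs a measured baseline")
    return baseline * (1 + pct_increase / 100)

# Hypothetical: brand recall currently measured at 30% of surveyed consumers.
# A 20% lift over that baseline implies reaching roughly 36% recall.
target = kpi_target(30.0, 20)
```

The arithmetic is trivial; the point is the guard clause. The plan my friend received skipped exactly that step: it set the percentage without ever establishing the baseline.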
Artificial intelligence has become a tool that amplifies human intelligence. In this case, the plan was undoubtedly more thorough than what my friend might have written on his own. But the serious issue is that it didn’t take into account his company’s real limitations. This happens for one simple reason: when we use AI, we’re interacting with a rather haughty tool. I use that word deliberately. These kinds of tools struggle to admit what they don’t know; they simply fill in the gaps with whatever they can find in their training data. The model doesn’t know certain things, and it can’t recognize that it doesn’t know them. This double blind spot makes its unsupervised use particularly dangerous.
I’m not trying to discourage its use; quite the opposite. But I must acknowledge that AI tools build content from however little or much information the user provides and then expand it to fill in the missing pieces. A good way to understand this limitation is to look at AI-generated images. Image generators such as Midjourney (used through Discord) or OpenAI’s tools often run into significant challenges when depicting human hands, especially interlaced fingers or people interacting, such as mothers and children. AI is fairly successful at illustrating a person on their own, but adding more people dramatically increases the complexity. In other words, the AI understands the outline of what two hands together should look like, but it fails at depicting how two people holding hands actually look while strolling through a park.
These tools quickly show less experienced users that any gaps will be filled with fantasies, hallucinations, or content generalized from similar material in their training data. This is a serious matter. The use of AI is one of the most significant developments in marketing since the arrival of digital marketing; however, it has also shown it can flood consumers with inaccurate information. That’s why most of these tools display notices recommending human supervision. But as they get better at generating images and text, detecting errors becomes almost impossible, and there’s a risk that reviewing the output will take more time than producing it manually would have in the first place. This has important implications for marketing and shows that AI is most effective when used with extremely clear and strict parameters.
Interestingly, this undermines one of its most valuable capabilities: surfacing things the user hadn’t considered or didn’t know about. Another important effect of improving answer quality is the tool’s use as a replacement for Google search. Some AIs now include links or icons to indicate their information sources, but the answer is often a blend of so many references that no individual source can be distinguished. This creates a dangerous bias, because users can assume everything they receive is accurate. It also affects content comparison: AI typically produces one response that merges multiple sources, often leading to a monolithic solution, a single path to solve a problem.
The lack of diversity in answers or information sources creates an echo chamber even more dangerous than what social media has produced so far. If I write to ChatGPT and get a response, it’s logical to assume it’s correct. But if I ask, for example, “What’s the best restaurant to eat mole in Mexico?”, I get five options—all located in Mexico City, none in Puebla. The tool assumed I was talking about the capital. When I expand the search nationwide, I now get three options in Mexico City, one in Puebla, and one in Oaxaca. Curiously, when widening the geographic scope, only two of the Mexico City options repeat, indicating a contradiction in the results. It’s not surprising that Google Gemini delivers different results depending on whether the question is asked in Mexico City or Puebla. The same inconsistency appears in many categories, from car reviews to appliance evaluations.
This instability in results poses a problem for marketing, as there’s no predictable way to present oneself to the consumer. According to ChatGPT itself, answers can vary based on multiple factors. If the question requires real-time information—such as news, weather, or stock prices—the response changes according to the latest data. In an ongoing conversation, answers may adapt to previous exchanges, but if you start a new chat, the criteria might change. For general or subjective topics, the phrasing might shift slightly for smoothness, though the core content remains. Additionally, if the question involves local businesses, services, or regulations, the answer might be tailored to the user’s location. Finally, although the model aims for consistency, slight variations can occur due to its probabilistic nature.
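The last of those factors, the model’s probabilistic nature, can be illustrated with a toy sketch. Language models pick each next word by sampling from a probability distribution, and a “temperature” setting controls how flat that distribution is. The scores below are invented purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw candidate scores into sampling probabilities.

    Higher temperature flattens the distribution, giving runner-up
    candidates a bigger chance of being picked on any given run.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for three candidate next words
logits = [2.0, 1.0, 0.2]

sharp = softmax_with_temperature(logits, temperature=0.5)  # top word dominates
flat = softmax_with_temperature(logits, temperature=2.0)   # alternatives likelier
```

At low temperature the top candidate dominates and answers tend to repeat; at higher temperature the runner-up words get sampled often enough that two identical questions can come back worded, or even structured, differently. That is the mechanical reason behind the variation described above.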
This clashes with traditional learning systems and the concept of a single source of truth (SSOT). Whether there is a single source of truth depends on context. In data management and business, it’s essential to ensure accuracy and consistency, avoiding errors and duplication. In science, truth must be evidence-based, though it can evolve with new discoveries. In philosophy and society, truth is subjective and depends on perspectives and interpretations; therefore, enforcing a single source can lead to biases. In AI and information systems, a centralized source is useful for structured data, but when it comes to general knowledge, multiple perspectives are preferable.
The central issue is that we’re using AI to answer questions that need structured data, yet from radically different contexts and perspectives. Clearly, this recipe is unstable. We must use AI with extreme responsibility so it truly helps rather than complicates our work in marketing.