Prompting Matters: ChatGPT vs. Claude on GA4 Campaigns

Aug 21, 2025 | Crisis Communications, Strategy, Tools


What happens when you love data, are intrigued by AI, and have too much time on your hands?

You test AI capabilities and pit ChatGPT against Claude to see what happens, of course!

Here’s the background: I gave each model a GA4 data file, along with an explanation of what it was and instructions to analyze campaign performance. I used the exact same prompt with both and hit send.
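(A quick aside for anyone trying this at home: before handing a file to an AI, it’s worth confirming what’s actually in it. Below is a minimal sanity-check sketch in Python; the file name and the specific checks are my own placeholders, not anything from the actual test.)

```python
import pandas as pd

# Hypothetical file name; the post doesn't share the actual export.
df = pd.read_csv("ga4_campaign_export.csv")

print(df.shape)             # rows x columns
print(df.columns.tolist())  # the headers the AI will have to interpret
print(df.dtypes)            # catch numbers that were imported as text
print(df.head())            # eyeball the first few rows
```

Here’s what happened next…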


ChatGPT

I started reading the response and, for the first time, actually contemplated whether AI was capable of having true human emotions. Its initial response had me laughing, only because I knew something was very wrong and it wasn’t the data.

It started with, “THIS IS A CRISIS.” I kid you not: all caps, and I could practically feel its stress.

Image: This is a CRISIS, not just a marketing optimization problem.

It basically told me I should be ashamed to call myself a marketer, the website should be shut down, and I needed to do some tasks “immediately,” like “see if I can even make a purchase on my website.”

Image: Critical questions I need answered: 1. Can you personally complete a purchase on your website right now? Test it immediately.

I asked it to take another look, because maybe, just maybe, it might be a bit off. Obviously, it was.

ChatGPT apologized profusely and said I was right to question the output. It then provided different insights, but something still seemed off. I went back to the data source to figure out why it wasn’t reading the file correctly and where the numbers were coming from. Then I wrote a thorough prompt explaining what the data file contained and how it was laid out, and asked it to review again. It did much better the third time.
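If you want to build that kind of file description without typing it all out, you can generate it from the file itself. A rough sketch, again with a placeholder file name (the post doesn’t include the actual prompt or export):

```python
import pandas as pd

df = pd.read_csv("ga4_campaign_export.csv")  # hypothetical file name

# Build a plain-language description of the file to paste ahead of the
# analysis request, so the model knows exactly what it's looking at.
description = (
    f"This is a GA4 campaign export with {len(df)} rows.\n"
    f"Columns: {', '.join(df.columns)}.\n"
    "Each row is one campaign. The first three rows look like this:\n"
    f"{df.head(3).to_string(index=False)}"
)
print(description)
```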


Claude

This tool felt more thorough, probably because it took a bit longer and ran snippets of code along the way, which somehow made the process ‘feel’ more official. It also provided some nice graphs and charts that could be downloaded.

Its initial response was much calmer and more like what data analysis should be. It saw no crisis at hand, so I was hopeful it was more accurate. And it was, to a point. It understood the assignment a bit better than ChatGPT, so I didn’t have to re-prompt it. The high-level results were accurate, and some of the more detailed findings were close, but it did miss the mark on other points.


Who Did It Better?

While both got some parts right, there were a few big discrepancies. ChatGPT told me that video skyrocketed and was the main driver of our success (its opinion of me turned around pretty fast, considering its initial statement). Claude, however, seemed to think the video had completely crashed out.

Here’s what I did next:

  • I asked both to re-read what they wrote and tell me whether they still agreed with their initial assessments.
  • I copied ChatGPT’s response into Claude’s chat, said this was “my” analysis, and asked for its opinion. And then vice versa.
  • It was at this point that I realized I need a life. But anyway…

Here’s what happened:

  • Both highly complimented my “thorough analysis,” noted where we agreed and where we differed, and admitted errors on their part.
  • Their reviews of their own analyses confirmed that errors had been made. ChatGPT admitted more errors than Claude and was quite apologetic. And it was right: it did make more mistakes than Claude.
  • I called each out on the video stats. Neither could defend its position well. ChatGPT turned out to be close, and this is when Claude admitted it had misread the data and corrected itself. (If you’d rather settle a disagreement like this yourself, see the sketch below.)
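When two models disagree on a number, the tiebreaker is the file itself. Here’s a minimal sketch of recomputing a disputed metric directly; the file and column names are hypothetical stand-ins, not the actual data from this test:

```python
import pandas as pd

df = pd.read_csv("ga4_campaign_export.csv")  # hypothetical export

# Hypothetical column names: "campaign", "sessions", "conversions".
video = df[df["campaign"].str.contains("video", case=False, na=False)]

# Recompute the disputed numbers from the raw rows instead of trusting
# either model's summary of them.
print(video[["sessions", "conversions"]].sum())
```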


What Did I Learn?

  • Between the two platforms and my own review, I feel I was able to deliver analysis and reporting that was accurate and complete. When using AI, it takes a village: AI analysis is a good start if you’re stuck or overwhelmed, but it’s not a one-stop solution.
  • Had I copied and pasted either assessment and called it my own reporting, I could have caused significant issues if decisions had been made based on the errors, particularly ChatGPT’s initial freak-out. It also would have devalued my work, and my boss could have lost trust in my capabilities.
  • You HAVE to tell AI what it’s looking at. Just saying it’s GA4 data with a brief description is not enough, even when the file has clearly labeled headings and columns. It can misread what it’s looking at (hence ChatGPT’s dramatic crash-out). Prompts are king.
  • Speaking of prompts… be careful here. Sometimes AI wants to make you happy, and the response may come through the lens of “telling you what you want to hear.” It’s better to give a neutral prompt than to bake your theory into it, as in the example after this list.
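To make that last point concrete, here are two hypothetical phrasings of the same request (neither is the actual prompt used in this test):

```python
# Neutral: lets the model report what the data shows.
neutral_prompt = (
    "Analyze campaign performance in the attached GA4 export. "
    "Summarize the strongest and weakest campaigns and anything surprising."
)

# Leading: hands the model a theory to agree with.
leading_prompt = (
    "Analyze the attached GA4 export. I believe video drove most of our "
    "success -- please confirm that video performed best."
)
```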


The Bottom Line

It was an interesting experience, and I now better understand what AI can do with data analysis. It sure wasn’t perfect, but it can be useful for streamlining the process and surfacing focus points for reporting. Even when it got the data and results wrong, it gave me direction on aspects of this task I hadn’t previously thought to report on.

AI won’t be replacing marketers anytime soon, but it can help us work smarter. If you’re new to data, it’s a great way to test ideas and then critically evaluate the results, which can be a valuable learning experience. Just be ready to dig deeper, question what it tells you, and use your own judgment to make the final call.


Author: Marianne Hynd, SMS

Marianne Hynd is the Director of Operations at the Social Media Research Association, a global trade organization dedicated to building a community of researchers who define and promote best practices and share ideas to enhance the effectiveness and value of research conducted through social media.

Take a listen to the SMRA Podcast featuring fellow NISM board member Joe Cannata.
