When the AI giants remember the human touch

This past week has been quite a ride in the AI world. We’ve seen major product launches, impressive technological leaps, and something I didn’t quite expect – AI companies doubling down on human experiences and analog craft to connect with their audiences. Let me walk you through what’s happening and why I think it matters for all of us working in digital.

Sora 2: The Video Revolution Arrives

OpenAI just released Sora 2, and honestly, it’s pretty disruptive. This isn’t just an incremental update – it’s a significant leap forward in AI video generation. Unlike the first Sora model from early 2024 that could only create silent clips, Sora 2 now generates videos with synchronized dialogue, background noise, and music that match the visuals.

Think about that for a second. You can describe a scene in text, and the AI creates a video complete with realistic physics, proper lighting, and matching audio. The app hit #1 on the US App Store within days of launch, ahead of both ChatGPT and Google’s Gemini. On its first day alone, Sora saw 56,000 downloads despite being invite-only and limited to the US and Canada.
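To make that concrete, here’s roughly what the text-to-video flow looks like in code. This is a minimal sketch using the OpenAI Python SDK; the videos endpoint, the “sora-2” model name, and the polling/download calls are assumptions on my part, so check OpenAI’s current API reference before copying this.

```python
# Minimal sketch of text-to-video generation with the OpenAI Python SDK.
# Assumptions: the SDK exposes a `videos` endpoint for Sora 2 and the model
# is called "sora-2". Verify both against the current API reference.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

video = client.videos.create(
    model="sora-2",
    prompt=(
        "A golden retriever surfing a small wave at sunset, handheld camera, "
        "with ambient beach sounds and distant laughter."
    ),
)

# Generation runs asynchronously: poll the job, then download the finished clip.
while video.status in ("queued", "in_progress"):
    time.sleep(10)
    video = client.videos.retrieve(video.id)

if video.status == "completed":
    client.videos.download_content(video.id).write_to_file("surfing_dog.mp4")
```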

But here’s where it gets interesting (and a bit concerning). Social media has been flooded with AI-generated videos – everything from realistic scenes to completely surreal content. We’re talking about videos that are so convincing, they’re raising serious questions about copyright, deepfakes, and what’s real anymore. OpenAI has had to backtrack on some copyright policies as videos featuring copyrighted characters started appearing everywhere.

The technology is impressive, no doubt. But it’s also a reminder that we’re entering uncharted territory when it comes to content creation and authenticity.

The Irony: AI Companies Go Analog

Here’s what caught me off guard. While OpenAI is launching this cutting-edge AI video tool, they’re marketing it with one of the oldest creative tools in the book: 35mm film.

Their first major ChatGPT brand campaign features real people in real locations, shot on actual film stock. We’re talking about a young guy using ChatGPT to cook a date night meal, someone training for pull-ups with AI guidance, and siblings planning a road trip. These aren’t slick CGI productions – they’re warm, tactile, human stories that happen to involve AI.

The campaign runs across TV, streaming, and outdoor advertising from London’s Piccadilly Lights to Los Angeles billboards, and it’s all about showing the “everyday magic” of ChatGPT in ordinary people’s lives. No futuristic dystopian vibes, no robots taking over – just humans using technology to do more of what they love.

And they’re not alone. Anthropic, the company behind Claude, took this human-first approach even further. They launched their “Keep Thinking” campaign and opened a pop-up at Air Mail’s West Village newsstand in New York City. Over 5,000 people lined up to get free coffee and “thinking” caps (yes, baseball caps with the word “thinking” on them). The activation generated over 10 million impressions on social media.

Social media is flooded with pictures of people wearing free caps from the tech company. (Instagram/@claudeai; screengrab via X)

The irony is beautiful, isn’t it? AI companies are using analog film, physical spaces, real coffee, and human stories to market their digital products. It’s almost like they know that in a world increasingly dominated by screens and algorithms, what people crave is authenticity and tangible experiences.

Claude Sonnet 4.5: The AI That Works for 30 Hours Straight

Speaking of Anthropic, they just released Claude Sonnet 4.5, and the technical leap is significant. They’re calling it “the best coding model in the world,” and the benchmarks seem to back that up.

Here’s what’s fascinating: Claude Sonnet 4.5 can now run autonomously for 30 hours on complex, multi-step tasks. That’s up from just 7 hours with the previous Claude Opus 4 model, released a few months ago. We’re watching AI evolve from being an assistant that needs constant guidance to something closer to a colleague that can take on entire projects.

The model excels at coding, cybersecurity, financial analysis, and even complex legal research. Companies using it report that it can autonomously patch security vulnerabilities, build entire applications (including standing up databases and purchasing domain names), and conduct SOC 2 audits. One researcher watched it code autonomously for the full 30 hours, building a production-ready application from scratch.

This represents a shift from prototypes to production-ready work. For those of us in digital, this changes the conversation from “Can AI help me?” to “What can I delegate to AI so I can focus on higher-level strategic work?”
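
To make “delegate to AI” concrete, here’s a minimal sketch of handing Claude Sonnet 4.5 a single bounded task through Anthropic’s Python SDK. The model alias and the code-review task are illustrative assumptions on my part, not something taken from Anthropic’s benchmarks.

```python
# Minimal sketch: delegating a bounded code-review task to Claude Sonnet 4.5
# via the Anthropic Python SDK. The model alias is an assumption; check
# Anthropic's current model list for the exact identifier.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("app/auth.py") as f:  # hypothetical module you want reviewed
    snippet = f.read()

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed alias for Claude Sonnet 4.5
    max_tokens=2000,
    messages=[
        {
            "role": "user",
            "content": (
                "Review this module for security vulnerabilities and propose "
                "fixes as a unified diff:\n\n" + snippet
            ),
        }
    ],
)

print(response.content[0].text)
```

The 30-hour autonomous runs Anthropic talks about layer tool use and an agent loop on top of calls like this one; a single request is just the smallest unit of delegation.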

The Reality Check We All Need

Now, before we all rush to implement AI in everything, let me share something that brought me back down to earth. A recent MIT Media Lab report found that 95% of generative AI investments have produced zero measurable return. Let that sink in for a moment.

It isn’t all smooth sailing, in other words. The hype around AI is real, but so is what the report calls “the AI experimentation trap”: companies are moving too fast, experimenting without a strategy, and failing just as quickly. They’re investing in AI solutions without first understanding what problem they’re actually trying to solve.

Here’s the uncomfortable truth: just because the technology is impressive doesn’t mean it’s the right solution for your specific challenge. AI isn’t a magic wand (no abracadabra here). It requires thoughtful implementation, clear objectives, and, most importantly, a real problem to solve.

The report suggests that organizations need to:

  • Spend time understanding the real issue before jumping to AI solutions
  • Avoid experimentation for experimentation’s sake
  • Focus on problems where AI can genuinely add value
  • Be patient with implementation and learning curves

This reminds me of the early days of any new technology. Remember when every company needed a mobile app, even if their business didn’t really need one? Or when blockchain was going to solve everything? AI is powerful, but it’s not a universal solution.

What This Means for Us

So what do we take away from all this?

First, the technology is advancing incredibly fast. We’re seeing capabilities that seemed impossible just months ago. Sora 2’s video generation, Claude’s autonomous coding, and other developments are genuinely transformative.

Second, even in the age of AI, human connection matters more than ever. The fact that OpenAI and Anthropic are investing heavily in human-centric marketing tells us something important. People don’t just want powerful technology – they want technology that enhances their humanity, not replaces it.

Third, we need to be strategic. The 95% failure rate on AI investments isn’t because the technology doesn’t work – it’s because companies are implementing it without a clear purpose. Before you bring AI into your workflow or recommend it to clients, ask: What specific problem are we solving? How will we measure success? What happens if this fails?

Finally, balance the hype with reality. Yes, AI can do amazing things. But it’s a tool, not a replacement for human expertise, creativity, and strategic thinking. The companies winning with AI aren’t the ones using it everywhere – they’re the ones using it intentionally, in places where it genuinely adds value.

Looking Ahead

We’re at an interesting inflection point. The technology is getting more capable, the applications are becoming more practical, and the market is maturing beyond the initial hype cycle. But we’re also seeing the consequences of moving too fast – from copyright issues to failed implementations to questions about authenticity.

The companies that will succeed are the ones that remember the lesson both OpenAI and Anthropic are teaching us: technology works best when it enhances human experiences, not when it tries to replace them.

So as we move forward, let’s embrace the possibilities of AI while staying grounded in what actually matters – solving real problems, creating genuine value, and maintaining the human touch that makes our work meaningful.

What are your thoughts? Have you seen AI implementations in your work that are actually delivering value? Or examples of the experimentation trap in action? I’d love to hear your experiences.

