
COEY Cast Episode 151
Gemma 4 Goes Open While Claude Code Spills Tea
Episode Overview
04/03/2026
Google DeepMind just made Gemma 4 a real open-model story, and that changes the conversation for teams that want more control over cost, privacy, and deployment. The bigger question is what open actually unlocks once the hype wears off. This episode covers where Gemma 4 could fit into real workflows like document ops, coding copilots, routing, and multilingual content systems. It also gets into the Claude Code source exposure, why boring packaging mistakes can turn into major security problems, and why people seemed weirdly more excited by clues about autonomous agents than by the leak itself. Tying it all together is the rise of the AI marketer and what automation can really own versus where humans still need to lead.


Episode Transcript
Hunter: It is Friday, April third, twenty twenty-six, and somehow it is both World Party Day and Tell a Lie Day, which is honestly an extremely dangerous combo for the AI internet. This is COEY Cast, the show that was once again assembled by a small army of automations, models, and machine gremlins that absolutely did not unionize overnight. I’m Hunter.
Riley: And I’m Riley. If anything sounds a little too smooth or a little too haunted, that is because the robots are in their experimental era. Which, honestly, same.
Hunter: Same. Alright, today we’ve got a very spicy mix. Google DeepMind drops Gemma 4 as a genuinely open Apache two point zero model family. At the same time, X has been melting down over the Claude Code source exposure. And hovering over all of it is this bigger question of whether we are entering the era of the so-called AI marketer, or just a fresh era of nicer looking chaos.
Riley: Mmm. It is like open models, leaked code, autonomous marketing agents. The group chat this week has been unwell.
Hunter: Let’s start with Gemma 4, because this one actually matters. Not just because Google launched another model, but because this one is open under Apache two point zero. That means commercial use is way less annoying from a licensing standpoint. And in practice, that is huge for teams that actually want to build things.
Riley: Yeah, this is the part where everyone on X starts acting like open model equals freedom, salvation, and six months of runway. And I’m like, okay, breathe. Open weights are not a strategy. They are ingredients.
Hunter: Exactly. A sane company should ask a few boring questions before it throws confetti. Like, do we have the team to host this, monitor it, secure it, and eval it? Do we know which workflows deserve local control and which ones are better off hitting an API? Do we have review layers, or are we just about to self host our own governance headache?
Riley: Thank you. Because the vibe online is very, oh my god, local deployment, Ollama, laptop AI, let’s go. And yes, cool, I love that. But if your brand team still cannot agree on one tone doc, giving them a local model does not magically create operational maturity. It just gives the chaos a private subnet.
Hunter: That should be on a T-shirt. But the appeal is real. Gemma 4 reportedly spans small edge-ready variants, a twenty-six billion parameter Mixture-of-Experts model, and a thirty-one billion parameter dense model. And Google is positioning it for reasoning, coding, multimodal tasks, and agentic workflows. That covers a lot of ground.
Riley: It does, but I want to challenge the hype a little. Do you think the real enterprise value shows up first in actual production wins, or are we about to get a wave of extremely expensive demos with cinematic dashboards and no one using them by May?
Hunter: Hah. Both. But if I had to bet, the first real wins are not sexy. Internal assistants. Retrieval heavy workflows. Coding copilots inside private environments. Document ops. Maybe multilingual content systems because Gemma 4 is getting pitched around broad language support too. Stuff where open licensing plus local deployment actually reduces friction.
Riley: So not, like, one magical super-agent doing branding, media buying, creative strategy, and posting thirst traps for your startup.
Hunter: Not if you want to keep your job, no.
Riley: Rude but fair.
Hunter: What makes Gemma 4 interesting to me is it fits the pattern we’ve been talking about all week. Models are becoming workflow layers, not novelty layers. We were just talking about AI video getting closer to real campaign use with better consistency and cheaper generation. We talked about voice and audio becoming programmable infrastructure, not just demo candy. Gemma 4 is that same shift on the language side. It feels less like, wow, new toy, and more like, okay, this could slot into a real stack.
Riley: Yeah, and it also connects to that whole small-models-in-the-stack idea. Sometimes the move is not one giant frontier model doing everything. Sometimes the move is a smaller or more controllable model doing the repeatable work, and then you escalate the weird stuff to a bigger system or a human.
Hunter: One hundred percent. That hybrid pattern is where teams save money and reduce risk. Let the open model do routing, classification, summarization, maybe structured drafting. Keep humans on taste, approval, and strategic calls. That is how you avoid turning your workflow into a trust fall.
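[Editor's note: the hybrid pattern Hunter describes, a smaller local model handling routine calls and escalating low-confidence cases to a bigger system or a human, can be sketched in a few lines. Everything here is illustrative: `local_model`, `escalate`, and the confidence threshold are stand-ins, not part of any real Gemma or vendor API.]

```python
# Sketch of the hybrid routing pattern: a small local model handles
# repeatable work (routing, classification, summarization) and anything
# low-confidence escalates to a larger system or a human reviewer.
# All functions below are hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.8  # tune against your own evals

def local_model(task: str) -> tuple[str, float]:
    """Stand-in for a small local model returning (answer, confidence)."""
    routine = {
        "summarize ticket": ("summary: customer wants a refund", 0.93),
        "classify intent": ("intent: billing", 0.88),
    }
    return routine.get(task, ("unsure", 0.30))

def escalate(task: str) -> str:
    """Stand-in for a frontier-model call or a human review queue."""
    return f"escalated: {task}"

def route(task: str) -> str:
    answer, confidence = local_model(task)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return escalate(task)
```

The design choice worth noting is that the threshold, not the model, encodes how much authority the automation gets; raising it shifts more work to humans without touching either model.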
Riley: Okay, but now let’s jump to the other side of the week, because while people were celebrating open models, they were also absolutely feasting on the Claude Code exposure story.
Hunter: Yeah. And the factual core seems pretty clear. A packaging or source map issue exposed a huge amount of Claude Code TypeScript before cleanup. No, it was not model weights. No, it was not training data. But it was still a very public own-goal.
Riley: And X did what X does. Instead of reacting like, wow, security failure, a lot of people reacted like, wait, are you telling me there are hints of more autonomous modes in here? Tell me more.
Hunter: That is the weird part. The security lapse matters, obviously. Teams building agentic systems should take that lesson seriously. Packaging hygiene, release processes, access controls, artifact reviews, all of that. The leak is a reminder that your weakest point may be the boring DevOps stuff, not some movie-style hack.
Riley: Boring infrastructure always comes back like the villain in a sequel. But I agree, the market reaction was revealing. People were almost more excited by the glimpse of internal agent behavior than concerned by the leak itself.
Hunter: Which tells you how hungry the market is for autonomous systems that do more than autocomplete. I think vendors have been careful in public and much more ambitious in private. So the second people sniff delegation, background learning, or more autonomous operation, they go feral.
Riley: Feral is correct. It was giving, this code leak is my new product keynote. And that is a little insane.
Hunter: A little, yes.
Riley: But also, maybe it means the market is tired of copilots. Like, people do not want one more assistant that drafts a paragraph and waits for praise. They want systems that can actually carry work across steps.
Hunter: I think that’s right. The battleground is shifting from assistance to execution. Which is why the marketing angle this week is so interesting. Helena from Enrich Labs gets framed as this autonomous AI marketer. Salesforce is out here pushing agentic marketing. The story is no longer, here is a tool that writes ad copy. The story is, here is a system that monitors competitors, creates campaigns, analyzes performance, remembers tasks, and keeps going.
Riley: Yeah, and every founder read that pitch and said, finally, a marketing department in a browser tab.
Hunter: Right. But this is where we need to be adults for like ten seconds. We are not at replace your marketing team territory. We are at replace fragmented busywork territory. That is different.
Riley: Thank you. Because some of this AI marketer language is absolutely PowerPoint cosplay. If the agent is just spitting drafts, summarizing competitors, and suggesting tests, that is useful, but that is not the same as owning a brand.
Hunter: Exactly. Humans still need to own strategy, taste, accountability, and yes, being the one who gets blamed when the bot gets weird at two in the morning.
Riley: Honestly that last one may be the real job title. Director of Bot Consequences.
Hunter: And trust is the harder problem. Not raw capability. Even if the system can do research, generate assets, and optimize campaigns, most organizations still do not trust it with enough authority to matter. They are happy to let it recommend. They get nervous when it starts shipping.
Riley: Mmm, yes. Companies love autonomy in theory until the agent starts touching budget, audience targeting, live creative, or customer messaging. Then suddenly everybody becomes a governance philosopher.
Hunter: And not always in a bad way. Some caution is healthy. We’ve been saying this with OpenClaw too. Open frameworks, local agents, private deployments, all that stuff can be really powerful for privacy-conscious teams. But you still need approval layers, good briefs, and actual operating discipline.
Riley: Also, not to be dramatic, but this has been the theme across the ecosystem all week. Video is getting better, audio is getting better, voice is getting better, open models are getting better, agents are getting more capable. But none of that removes the need for taste. It just raises the cost of bad process.
Hunter: That is very well put. AI is making the boring middle easier to automate. It is not solving for judgment. If anything, judgment becomes more valuable because the systems can now produce so much volume, so quickly.
Riley: So if I am a marketer listening to this and I’m deciding between an open ecosystem and a closed, cautious one, what’s your move?
Hunter: I’d say open earlier than most people are comfortable with, but not blindly. Use open models where control, privacy, or cost predictability matter. Keep strong evals. Keep fallback paths. Keep humans in the loop on brand and risk. The teams that win will be flexible, not reckless.
Riley: Yeah. Don’t pretend X is writing your roadmap, but also don’t pretend the market is not telling you what it wants. People want more control, more execution, and less lock in. They just also want fewer AI surprise attacks.
Hunter: That’s the sweet spot. Open enough to adapt. Structured enough to trust.
Riley: Cute. Put it on a tote bag.
Hunter: Alright, that’s our Friday pulse check. Gemma 4 says open models are getting more deployable. The Claude Code mess says your packaging process can become front page news. And the whole agentic marketing wave says the next fight is not over who can generate content. It is over who can actually orchestrate work without creating automated sludge.
Riley: Mmm. And on World Party Day, maybe celebrate by not letting your autonomous ad agent start a party in the wrong audience segment.
Hunter: Please. Keep the party permissioned.
Riley: Thank you for hanging with us on COEY Cast.
Hunter: And check out COEY.com slash resources for AI news and updates.
Riley: And subscribe, because the robots crave engagement almost as much as they crave task delegation.
Hunter: Catch you later.
Riley: Later.
