COEY Cast Episode 138

Open Source or Closed? AI Workflow Winners This Week

  • Riley Reylers

  • Hunter Glasdow

Episode Overview

03/21/2026

Google DeepMind is testing Deep Think in Gemini 2.5 Pro, Ollama 0.17 makes OpenClaw easier to run locally, and Typeface is pushing deeper into governed marketing orchestration. The bigger story is not which demo looks smartest. It is which setup actually helps teams ship better work with fewer surprises. This conversation breaks down where stronger reasoning helps, where local open source stacks save money but add ops overhead, and where commercial platforms earn their keep with approvals, compliance, and brand control. For creators, marketers, and media operators, the takeaway is simple: automate prep, QA, routing, and research, but keep humans close to final judgment, sensitive messaging, and anything that can create brand risk.


Episode Transcript

Hunter: Happy Saturday, March 21st, and happy Common Courtesy Day, which feels almost too optimistic for the AI timeline. This is COEY Cast, the podcast that was assembled by an unruly stack of models, automations, prompts, and probably one gremlin in a workflow. I’m Hunter.

Riley: And I’m Riley. Honestly, Common Courtesy Day is kind of perfect because if you are using AI agents right now, courtesy includes not letting them freestyle inside your ad account.

Hunter: That should be on a plaque somewhere. So today we’ve got a really fun trio of stories. Google DeepMind’s Deep Think mode for Gemini two point five Pro is getting people on X very excited. Ollama zero point one seven is making OpenClaw easier to run locally. And Typeface is pushing harder into governed marketing orchestration. Which, Riles, is basically the whole mood of AI right now.

Riley: Totally. Smarter models, cheaper local stacks, and enterprise platforms saying, hey babe, what if chaos had permissions. It’s like the industry is choosing between brain power, DIY freedom, and the adult supervision package.

Hunter: That is weirdly accurate. And I think the big question underneath all of it is not, which demo looks coolest. It’s, what actually helps operators ship better work without creating a cleanup crew.

Riley: Yes. Because the internet loves to confuse benchmark flexing with workflow value. Like, congrats on your science fair ribbon. Can it help a real team do planning, QA, reporting, and content ops without hallucinating a strategy deck into the void?

Hunter: Exactly. So let’s start with Deep Think. The chatter around Gemini two point five Pro is that this reasoning mode is posting stronger coding and multimodal performance with testers. And I care less about the brag sheet than what better reasoning changes in marketing work.

Riley: Wait, say the quiet part.

Hunter: Better reasoning matters when the job has consequences. Media planning, campaign QA, research synthesis, pulling patterns out of messy inputs, checking if a landing page actually matches the offer in the email and the ad. That kind of work.

Riley: Mm-hmm. So not just, write me ten hooks in a sassy tone.

Hunter: Right. A fast model can do that. A reasoning model earns its keep when the task has layers. Like, here’s the brief, here’s prior campaign performance, here’s the product notes, here’s legal language, here’s the audience nuance, now tell me what is inconsistent and what we should fix before this ships.

Riley: I’m with you, but I want to challenge that a little. Better reasoning can still become better wrongness if the inputs are trash. Like, a model can think deeply about a bad spreadsheet and give you premium nonsense.

Hunter: Totally fair. Deep reasoning does not magically create truth. It improves how the model works through the material you give it. If your source inputs are messy, biased, outdated, or incomplete, the answer can still be a polished disaster.

Riley: Which is why I don’t want teams hearing Deep Think and imagining some oracle on a mountain. To me the real unlock is critic behavior. Can it review work better, catch contradictions, flag risk, and ask better follow-up questions?

Hunter: Yes. That’s the enterprise move. Not, let the model decide the campaign. More like, let the model think harder before it hands a human something. We’ve talked about that with Nemotron and those agent episodes too. The more capable the model gets, the more valuable human judgment becomes, not less.

Riley: Also, I think people online keep missing that long context is a bigger desk, not a better brain. If Deep Think can sit with more campaign history and actually reason across it, that’s useful. But it still needs grounded inputs and rules.

Hunter: That’s such a good way to put it. If I’m vetting a vendor in this moment, I’m asking stuff like, can I control where the model is allowed to act, can I log decisions, can I inspect outputs, can I enforce structured responses, can I route high-risk cases to humans, and can I test it on my real messy workflow, not their clean demo?

Riley: Yes, because every vendor demo looks like the model went to finishing school. Then you plug it into your actual brand folder and it meets twelve contradictory docs, three product names, and one cursed PDF from two rebrands ago.

Hunter: The cursed PDF always wins.

Riley: It really does.

Hunter: Okay, let’s jump to the local side because this one is spicy. Ollama zero point one seven adding built-in support to launch OpenClaw workflows locally is a big deal for people who want agent systems without living on cloud API bills.

Riley: Local AI people are acting like they’ve been liberated from a very expensive kingdom.

Hunter: In some ways, they have. If you can spin up OpenClaw from the command line and run lower-cost or local models on your own hardware, that opens up a ton of experimentation. Internal research assistants, lead routing helpers, content prep flows, even browser-based tasks in a more controlled environment.

Riley: I love the vibe of that. It’s very, if I’m already on a Mac or Linux box, let me cook. But I think people are underestimating the ops tax. Local freedom is still infrastructure. It’s still maintenance. It’s still security. It’s still, who updates this when the weird connector breaks at eleven at night?

Hunter: That’s the whole thing. The one-click fantasy is never the full story. OpenClaw looks way more real now, especially with recent updates around streaming, browser control, fallback behavior, and better day-to-day usability. But local agents are not automatically easier. They’re cheaper in some dimensions and heavier in others.

Riley: Thank you. Because every time someone says no API fees, I’m like, okay but now your ops team is dating a server rack.

Hunter: That relationship can get complicated. The practical sweet spot for local agents is internal work where privacy, cost control, and experimentation matter more than perfect polish. Stuff like research prep, tagging, enrichment, first-pass drafting, maybe internal knowledge retrieval. Things where if the system hiccups, nobody outside the company sees it.

Riley: Yes. Back office magic. Not, let’s have the local agent fully own the product launch microsite while everyone goes to lunch.

Hunter: Exactly. Start where the blast radius is small. And be honest about hardware too. Some of the open model chatter gets very dreamy. Sure, there are powerful open models out there, and we’ve seen that with Nemotron and Kimi conversations recently, but your local setup still has limits. Speed, memory, context handling, reliability under load, all of that matters.

Riley: And security. Sorry, I know it’s not sexy, but local does not mean safe by default. If that agent can touch browser sessions, files, credentials, or connected tools, congrats, you’ve built a very capable little chaos goblin unless permissions are tight.

Hunter: That’s not even dramatic. It’s true. The practical rule is this: local is great when you want control, lower recurring cost, and flexibility. Commercial platforms are great when you want support, governance, uptime, and less internal babysitting. And most orgs are going to need both.

Riley: Hybrid stack supremacy. I keep saying this. Open source for experimentation and internal leverage. Commercial tools where governance, compliance, and scale actually matter.

Hunter: Yep. And not everything should be automated just because it can be. Brand risk, legal claims, sensitive messaging, high-stakes performance interpretation, those all still need a human close to the wheel.

Riley: Which brings us beautifully to Typeface, because this is the opposite end of the spectrum. Not local freedom. More like governed campaign mothership energy.

Hunter: Totally. And I think this might be the most directly relevant story for enterprise marketing teams. Typeface is leaning hard into orchestration across ads, email, social, web, with brand libraries, governance controls, agent workflows, the whole thing.

Riley: This is where I get both excited and suspicious. Excited because, finally, someone is trying to solve the actual workflow problem. Suspicious because a lot of orchestration software is just generative chaos wearing a blazer.

Hunter: That’s the right level of skepticism. The promise here is real if the system actually connects planning, generation, review, and channel execution with controls. The risk is that you end up with a very expensive dashboard that still needs humans to do all the real thinking.

Riley: My test is pretty simple. Does it reduce handoff pain? Does it keep brand consistency without making everything feel dead? Does it give me receipts? Can I see what changed, why it changed, and who approved it?

Hunter: That’s the test. And can it fit the way the company actually works? Because the future is probably not one giant agent doing everything. It’s coordinated systems. One model or agent drafts. Another checks. Another formats. Another routes. A human approves where needed.

Riley: Modular automation. Which is very on theme with what we’ve been talking about this week around Runway, SkyReels, QuarkAudio, all of it. Faster tools don’t remove taste. They make taste the bottleneck.

Hunter: Yes. That line keeps showing up across every medium. In video, in audio, in language models. As generation gets faster and cheaper, the scarce thing becomes judgment, governance, and creative direction.

Riley: So if I’m advising an AI-positive but not AI-naive org, here’s my vibe. Use smarter frontier models when the task needs serious reasoning. Use local open stacks for internal experimentation and contained workflows. Use commercial orchestration platforms when the work spans teams, channels, approvals, and compliance.

Hunter: I’m with you. And I’d add one more thing. Don’t automate final taste. Automate preparation, transformation, QA, and routing. Let humans make the last call on what represents the brand.

Riley: Ah, yes. The timeless principle of do not let the robot post through your main account unsupervised.

Hunter: Common Courtesy Day, but for AI.

Riley: Honestly, that should become a holiday.

Hunter: So if we zoom out, what’s the bet for the next year? Smarter models, cheaper open-source automation, or tighter governed systems?

Riley: Ugh, rude. I want all of them.

Hunter: Same.

Riley: Fine. If I have to choose the winning pattern, it’s tighter governed systems built on top of smarter models and selective open-source pieces. Not because that’s the coolest answer, but because companies need AI they can defend to other humans.

Hunter: That’s exactly it. The next breakout wins are not just model wins. They’re system wins. Can you combine strong reasoning, cost-aware routing, and guardrails into something a team can actually operate every day?

Riley: And can you keep the weirdness close enough to the humans that it becomes useful weird, not lawsuit weird.

Hunter: There it is. That’s the bar.

Riley: Alright, that’s our Saturday brain dump from the machine collaboration circus.

Hunter: Thanks for hanging with us on COEY Cast.

Riley: And thanks for spending your Common Courtesy Day with two humans and a suspicious amount of automation.

Hunter: If you want more AI news and updates, check out COEY.com slash resources.

Riley: And don’t forget to subscribe.

Hunter: Catch you next time.

Riley: Later.
