COEY Cast Episode 162

Meta’s Muse Spark and the Closed AI Workflow Play

  • Riley Reylers
  • Hunter Glasdow

Episode Overview

04/13/2026

Meta’s Muse Spark is the clearest sign yet that AI is shifting from flashy demos into real workflows. The bigger move is not just the model itself, but the way Meta dropped it straight into the apps people already use all day. That makes Muse Spark immediately relevant for marketers and creators working across Facebook, Instagram, Messenger, WhatsApp, and more. The conversation also tracks why Google DeepMind’s Lyria 3 Pro matters for production-ready audio, and why Wan 2.7 is getting attention for practical video controls like motion transfer and multi-image guidance. The common thread is simple: AI is getting less theatrical and more useful for teams trying to ship content faster, with humans still making the calls.


Episode Transcript

Hunter: It is Monday, April thirteenth, twenty twenty-six, and if you needed a sign to arrange your thoughts before the robots remix them, apparently it is Scrabble Day. This is COEY Cast, the show assembled by a slightly unhinged chain of AI tools, automations, and machine helpers that absolutely did not ask for union representation. I’m Hunter.

Riley: And I’m Riley. We’ve got the full robo-band in the building today. Synthetic producers, automated notes, probably one model somewhere feeling very proud of itself. If anything gets weird, congrats, that is the art.

Hunter: Honestly, very on brand. Today we’ve got a pretty clean snapshot of where AI is heading for actual work, not just demo theater. Meta’s Muse Spark is the big headline. Then you’ve got Google DeepMind’s Lyria three Pro making AI music feel a lot more production-ready. And Wan two point seven showing up on Joypix with controls that, uh, finally sound like they were made for adults with deadlines.

Riley: That’s the vibe. Less “look at this cursed demo” and more “can I ship this before lunch?” Which, to me, is way more interesting. Also, tiny context check, we already talked this week about open source stacks and video workflows getting more serious. So this episode kinda feels like the closed-platform counterpunch.

Hunter: Yeah, totally. And Muse Spark is the clearest example. Meta launched it as a natively multimodal system, so text, voice, image, and some coding-style tasks all in one. But the bigger story is not just what the model can do. It’s where Meta put it. They dropped it right into the Meta AI app and the web experience, with rollout tied to Facebook, Instagram, Messenger, WhatsApp, and even the glasses.

Riley: Which is such a ruthless move. Like, everybody wants to argue benchmark charts, and Meta’s over here saying, “that’s cute, we own the apps people check every seven seconds.”

Hunter: Exactly. Distribution still eats demos for breakfast. The app reportedly jumped hard in the U.S. App Store right after launch, and the market liked it too. That tells you something. If you already own attention, AI does not need to win as a standalone destination. It just needs to become the invisible helper inside the place people already live.

Riley: Okay but hold up, Hunt, I wanna push on that. Because “distribution wins” is true, but only until the assistant starts giving weird outputs inside your actual workflow. If I’m a marketer and Muse Spark is right there in Meta land, great. But if it’s not dependable, then it’s just a fancy intern sitting in my notifications tab.

Hunter: That’s fair. And that’s the tension. Muse Spark looks operator-ready, not fully infrastructure-ready. You can use it now in product. Broad public API access, not really. So for teams, this is more like a co-pilot inside the platform than a model you cleanly wire into your own stack today.

Riley: Which honestly still matters. If you’re a brand team living in Meta’s ecosystem, image-grounded captioning, ad variants, reply drafting, visual analysis, that is useful right now. It shortens the annoying middle of the work.

Hunter: Right. And that’s where I think companies need to update how they think about AI. The winners may not be the teams with the prettiest benchmark flex. They may be the teams that insert intelligence into workflow first. Not “look what our model scored,” but “can my team go from asset to variation to publish faster with fewer handoffs?”

Riley: Mm-hmm. It’s the old software lesson all over again. Best tech does not automatically win. Best placement wins a lot. Like, history says this constantly. Browser wars, mobile operating systems, social apps, even editing tools. If you become the default place where work starts, you get a crazy advantage.

Hunter: And Meta has that shot. But there’s still a tradeoff. Closed platforms give you convenience and distribution. Open systems give you control. We were just talking on the show the other day about open source getting real for business infrastructure. DeepSeek, OpenClaw, GLM, that whole wave. If your team wants governance, custom routing, auditability, private deployment, closed consumer apps are not the whole answer.

Riley: Yeah. OpenClaw is still giving “very confident intern with root access” energy if you set it up badly, but at least it’s your intern. You can govern it. You can tune it. You can wrap it in approvals. With closed platforms, you’re renting the magic inside somebody else’s mall.

Hunter: That’s the cleanest way to say it. So if you ask me where Muse Spark matters most, it’s for creators and marketers already deep in Meta channels who want speed and proximity. If you’re building a serious cross-platform automation stack, you probably still want optionality.

Riley: Also, the chatter on X was very much “Meta finally shipped a real thing” energy. People were calling it their first major new model in about a year, and the tone was kind of relieved. Like, oh wow, they stopped doing science fair tri-fold board AI and plugged it into the business.

Hunter: Yeah, and that’s not nothing.

Riley: Now let me pivot to the sleeper story because audio people have been living in the future quietly for months. Google DeepMind’s Lyria three Pro is getting a lot of love for sounding way more legit. Better realism, stronger instrumentation, better genre blending, longer tracks. And online the tone is less “cute toy” and more “wait, this is actually usable.”

Hunter: I think this one matters a lot for marketing teams. Music has always been one of those sneaky bottlenecks. You’ve got the video, the voiceover, the copy, and then everybody gets stuck trying to find a track that fits, clears rights, and doesn’t sound like generic corporate uplift number forty-three.

Riley: Sonic wallpaper. The plague.

Hunter: Exactly. So if Lyria three Pro pushes audio quality into production-grade territory, then music becomes a configurable asset class. You can generate a few directions for a product clip, a podcast bed, a brand sting, social sound design, and let humans choose. That is a big workflow shift.

Riley: But let’s not pretend it’s automatically a golden age. Cheap polished audio can also create a flood of perfectly acceptable mush. Like, the internet absolutely does not need more emotionally vacant ukulele optimism.

Hunter: I agree. Lower cost creation does not guarantee stronger creative. It just increases volume. The upside is faster experimentation. The downside is sameness at scale.

Riley: Which means taste becomes the moat. Again. Sorry to everyone hoping the machine would replace having a point of view.

Hunter: And this connects to the bigger ecosystem stuff too. We’ve had all this movement around Veo three opening wider, Adobe leaning into control and brand safety, ElevenLabs pushing more enterprise options, and now Lyria pushing audio quality up. Across the board, the story is not just “models got better.” It’s “more pieces of the media stack are becoming promptable and routable.”

Riley: Ooh, that’s good. Promptable and routable. Put it on a hat.

Hunter: No hats.

Riley: Coward.

Hunter: Fair. But audio still has some real concerns. Watermarking, provenance, rights boundaries, whether vocals feel too generic, whether teams can actually integrate it at scale. Great output in an app is useful. Great output inside a managed content pipeline is a different level.

Riley: Which brings us to Wan two point seven, and honestly this might be my favorite update for practical people. Because the feature list is not trying to seduce me with abstract benchmark poetry. It’s saying multi-image guidance, motion transfer, first-frame and last-frame continuation. That is editor brain candy.

Hunter: Yeah. This is the big shift in video AI. Controllability is starting to matter more than raw realism. If I can guide continuity, preserve character or product identity, transfer motion between concepts, and extend a shot with intent, I can actually use the system in campaign work.

Riley: Thank you. Because I do not need another hyper-real alien catwalk demo. I need to know whether I can make version B of the same product story without rebuilding the entire thing from scratch and praying the AI remembers what shoes my model was wearing.

Hunter: And that’s why this matters for organizations. The next phase of adoption is less “wow, look what it made” and more “can I get a repeatable result twice without losing my mind?” Wan two point seven, at least from the chatter, seems to be getting more respect for steerability than for spectacle.

Riley: People on X were basically saying it looked solid, useful, not necessarily a face-melting leap over everything else, but practical. Which I kinda love. That’s a healthy market signal. Hype is cheap. Repeatability is expensive.

Hunter: Well said. And it lines up with what we’ve been seeing from Seedance two point zero and even the broader Veo conversation. Better workflows are what ship. Not just prettier one-offs.

Riley: So if a team is placing bets right now, where do you put chips? Multimodal assistants like Muse Spark, controllable video like Wan, or audio like Lyria?

Hunter: I’d say it depends on your bottleneck. If your team lives in Meta surfaces and needs faster ideation and adaptation, Muse Spark is immediately relevant. If your pain is production volume and visual iteration, controllable video is the bigger lever. If you’re doing short-form, podcasts, ads, product videos, audio is probably underpriced in terms of impact because it changes polish and pace fast.

Riley: I’d go a little more aggressive on video and audio before betting the house on a single giant assistant. One model to rule the workflow still feels a bit expensive-theater-ish to me. Helpful, yes. Default workhorse for every team? Hmm. Maybe eventually, but right now specialized tools are still landing cleaner punches.

Hunter: I think that’s right. One multimodal workhorse is the direction of travel. But in practice, teams still get better results by combining tools, setting checkpoints, and keeping humans in the loop. That’s the boring answer, which means it’s usually the correct one.

Riley: Ugh, responsible nuance. My enemy.

Hunter: You love it.

Riley: I do, annoyingly. So the meme caption for this whole week is what, “AI is leaving its demo era and applying for an operations job?”

Hunter: That’s good. Mine is, “Cool benchmark, but can it survive a content calendar?”

Riley: Oh, that’s nasty. I like it.

Hunter: Alright, that’s the show. Thanks for hanging with us on this Monday, on glorious Scrabble Day, where apparently the winning word is workflow.

Riley: Thanks, everybody. Go build something weird and useful with your human brain still firmly attached.

Hunter: And make sure you check out COEY.com slash resources for AI news and updates.

Riley: Yep, and subscribe so you don’t miss the next one. Catch you later.
