AI Agents · May 1, 2026 · 8 min read

How a Mobile Game Studio Cut Its Weekly Design Sprint from 7 Days to 1

Ace Games used Mindra across three departments — game design, UA, and art — to run a 6-day pilot. Here's what changed.



Ace Games is building Clue Chase, a social coin-looter in the vein of Monopoly GO. The game has an aggressive feature roadmap, and the team was moving fast — but running into the same bottlenecks that slow down almost every mobile studio at scale.

Over 6 days, they set up 7 custom AI assistants across 3 departments, built a ~1 GB knowledge base, and ran 506 messages through the system. The results came faster than expected.


The Problem: Three Bottlenecks Eating the Team's Time

Game design was the biggest time sink. Designing a single new event meant a full one-week sprint: reading through the GDD, analyzing competitors (Monopoly GO, Coin Master), reviewing gameplay footage, brainstorming, and writing the design doc. The team had 295 gameplay videos — but manually watching and referencing them was practically impossible. Game designers were spending 60% of their time on research and context-loading, not designing.

UA reporting took half a day every week. Pulling performance data from AppsFlyer dashboards, building comparisons across campaigns, and identifying anomalies was entirely manual. A campaign could underperform for 3–4 days before anyone noticed. Comparing top-spending campaigns required manual pivot tables.

Art trend tracking had no consistent system. Keeping up with AI tools, 2D/3D asset trends, and art direction shifts was done informally — everyone followed different sources, and there was no shared, categorized feed.


The Solution: Department-Specific AI Assistants

Game Design

Setup: The team uploaded their GDD, gameplay video snippets, and design reference docs to Mindra — 337 files, ~1 GB total. A web search agent was added for competitor research, and the game's core design pillars were saved to memory as persistent context.

The prompt that kicked things off:

"I want to design a multiplayer digging event for Clue Chase. Look at the GDD, video snippets, and other coin looter games and design this event."

From a single prompt, Mindra ran 127 semantic searches across 17,000+ chunks in the knowledge base, pulled competitive data on Monopoly GO and Coin Master via web search, and produced a complete 2-page Feature Design Document — including a progression math model, monetization strategy, and push notification plan.
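For readers wondering what "127 semantic searches" means mechanically: retrieval like this is typically embedding-based similarity search over the chunked knowledge base. Here's a minimal sketch, assuming one precomputed embedding per chunk; the function names are ours for illustration, not Mindra's actual API.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query_vec: np.ndarray,
                    chunk_vecs: list[np.ndarray],
                    chunks: list[str],
                    top_k: int = 5) -> list[tuple[float, str]]:
    """Rank knowledge-base chunks by similarity to the query embedding.

    A query like "progression math for a digging event" is embedded once,
    then scored against every chunk embedding (~17,000 in this pilot).
    """
    scored = sorted(
        ((cosine_sim(query_vec, vec), text) for vec, text in zip(chunk_vecs, chunks)),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return scored[:top_k]
```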

Here's a sample from the output:

The Big Dig — Design Pillars

  • Active over passive — Unlike passive point pooling, every Shovel is actively spent on a shared grid
  • Visible cooperation — Every friend's dig is shown live; avatars fly to the tile they just cleared
  • Detective theming — Artifacts are jeweled magnifying glasses, golden statues, evidence cases — not generic treasure chests
| Power-Up | Effect |
| --- | --- |
| Magnifying Glass | Instantly clears a 3×3 area of tiles |
| UV Lamp | Highlights the exact location of a hidden artifact |
| Search Warrant | Clears an entire row or column |
| Chapter | Levels | Grid | Avg Tiles to Complete | Net Shovels/Level |
| --- | --- | --- | --- | --- |
| Ch.1 — Surface Layer | 1–5 | 6×6 | ~40% | ~12 |
| Ch.2 — Sandy Depths | 6–10 | 7×7 | ~45% | ~19 |
| Ch.3 — Stone Vault | 11–15 | 8×8 | ~50% | ~28 |
| Ch.4 — Deep Chamber | 16–19 | 9×9 | ~55% | ~40 |
| Ch.5 — The Grand Vault | 20 | 10×10 | ~60% | ~54 |
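If you want to sanity-check that table, its math can be reconstructed with one assumption: power-ups mean each shovel clears slightly more than one tile on average. The ~1.15 multiplier below is our guess, chosen because it roughly reproduces the Net Shovels column (within a shovel or two); it is not Ace Games' actual tuning model.

```python
# Hedged reconstruction of the progression table. The tiles-per-shovel
# multiplier is an assumption (power-ups clear extra tiles for free),
# not the studio's real tuning value.
CHAPTERS = [
    ("Ch.1 — Surface Layer", 6, 0.40),
    ("Ch.2 — Sandy Depths", 7, 0.45),
    ("Ch.3 — Stone Vault", 8, 0.50),
    ("Ch.4 — Deep Chamber", 9, 0.55),
    ("Ch.5 — The Grand Vault", 10, 0.60),
]
AVG_TILES_PER_SHOVEL = 1.15  # assumed power-up uplift

for name, grid, completion_pct in CHAPTERS:
    tiles_to_clear = grid * grid * completion_pct      # share of grid dug out
    net_shovels = tiles_to_clear / AVG_TILES_PER_SHOVEL
    print(f"{name}: {grid}x{grid}, ~{tiles_to_clear:.0f} tiles, "
          f"~{net_shovels:.0f} net shovels/level")
```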

When the game designer pushed back — "5 levels isn't enough" — Mindra recalculated the 4-player throughput math, referenced Monopoly GO's benchmark of 16–25 levels for solo events, and moved to the 20-level, 5-chapter structure shown above. It argued its position with data instead of simply deferring.

Impact: A 1-week design sprint was completed in 1 day. The game designer can now iterate on 8–10 features per month instead of 4.


UA Manager

Setup: AppsFlyer was connected via three endpoints (installs, in-app events, uninstalls). Web search and Meta Ads Library agents were added. Slack integration was configured for automated weekly report delivery.

The prompt:

"Give me last 3 weeks performance for games.ace.clue. Focus on top 2 spending campaigns and highlight any anomalies, outliers, and best performers."

Mindra called all three AppsFlyer endpoints in parallel, aggregated by campaign, identified top and bottom performers, ran anomaly detection, and produced a full 3-week performance report in Markdown.
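The mechanics here are straightforward to sketch: fan out to the report endpoints concurrently, group results by campaign, and flag days that deviate sharply from a campaign's own baseline. Everything below (URLs, field names, the 2-sigma threshold) is illustrative, not AppsFlyer's or Mindra's real interface.

```python
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests

# Placeholder endpoints; AppsFlyer's real report URLs and auth differ.
ENDPOINTS = {
    "installs": "https://api.example.com/installs",
    "in_app_events": "https://api.example.com/in_app_events",
    "uninstalls": "https://api.example.com/uninstalls",
}

def fetch(item: tuple[str, str]) -> tuple[str, list[dict]]:
    name, url = item
    resp = requests.get(url, params={"app_id": "games.ace.clue", "days": 21})
    resp.raise_for_status()
    return name, resp.json()  # assume rows like {"campaign": ..., "count": ...}

# Fan out: call all three endpoints in parallel.
with ThreadPoolExecutor() as pool:
    reports = dict(pool.map(fetch, ENDPOINTS.items()))

# Aggregate daily install counts per campaign, then flag days more than
# two standard deviations from that campaign's own mean (a crude check).
by_campaign: dict[str, list[int]] = {}
for row in reports["installs"]:
    by_campaign.setdefault(row["campaign"], []).append(row["count"])

for campaign, counts in by_campaign.items():
    if len(counts) < 2:
        continue
    mean, stdev = statistics.mean(counts), statistics.stdev(counts)
    outliers = [c for c in counts if stdev and abs(c - mean) > 2 * stdev]
    if outliers:
        print(f"Anomaly in {campaign}: {outliers} vs. mean {mean:.1f}")
```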

A sample from the output:

Anomalies & Outliers

1. Cross-Attribution Overlap (Significant)
Multiple installs show Google as primary attribution but Meta as contributor. A Google Search install (Lille, FR) had Meta ROAS Tier1 as an impression contributor the day before. This suggests significant audience overlap between Meta and Google campaigns — both targeting the same Tier1 ROAS pool. Risk of inflated install counts and double-spend on the same users.

2. UK Traffic — Google Display Only
All UK installs came exclusively from Google Display. Meta has zero UK installs in the dataset — unusual given Meta's strong UK reach. Either Meta is not targeting UK, or UK is excluded from Meta campaign geo settings. Worth verifying if this is intentional.

Best Performers

| Category | Winner | Detail |
| --- | --- | --- |
| Deepest Engagement | Google Search / Set1 | User reached af_case_complete_5 — 3-day retention |
| Fastest Onboarding | Meta Set1 | SSO fired within 16 seconds of install |
| Best Geo | France (FR) | Dominates installs and in-app events across both campaigns |
| Cross-Channel Synergy | Google Search + Meta (FR) | Multi-touch paths showing Meta impression → Google Search click conversions |

One important moment: when cost-per-install data wasn't available in the raw API response, Mindra didn't hallucinate numbers. It flagged the gap honestly — "The cost field is empty; this is a known AppsFlyer API limitation, it needs to be pulled from the Aggregate Performance Report" — and offered three alternatives.
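That pattern, flagging the gap instead of imputing a number, is worth baking into any reporting pipeline. A minimal guard, with field names assumed for illustration:

```python
def cost_per_install(row: dict) -> str:
    """Report CPI only when cost data is actually present; never impute."""
    cost, installs = row.get("cost"), row.get("installs")
    if cost in (None, "") or not installs:
        # Known gap: the raw-data pull can omit cost entirely. Say so,
        # and point at the aggregate report instead of guessing a number.
        return "cost unavailable in raw pull; use the Aggregate Performance Report"
    return f"CPI: ${float(cost) / installs:.2f}"
```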

Impact: A half-day weekly reporting task now takes 5 minutes. Anomalies that took 3–4 days to surface are now caught with instant Slack notifications. Cross-attribution issues that would have been invisible to the naked eye were automatically flagged.


Art / Trend Curation

Setup: X/Twitter was connected via OAuth. Five category labels were saved to memory: Art & Creative, Game Development, Data & Analytics, CEO Office & Strategy, AI Tools & Industry. A scheduled task was configured to run every morning at 9am.

The workflow:

Every morning, Mindra runs parallel searches across all five categories, categorizes and prioritizes the results, and delivers a digest to the team's Slack channel. One prompt to set up; zero effort to maintain.
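The shape of that workflow is simple enough to sketch. Below, search() is a stand-in for a real X/Twitter search client and the webhook URL is a placeholder; the cron line at the end covers the 9am schedule.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

CATEGORIES = [
    "Art & Creative", "Game Development", "Data & Analytics",
    "CEO Office & Strategy", "AI Tools & Industry",
]
SLACK_WEBHOOK = "https://hooks.slack.com/services/..."  # placeholder URL

def search(category: str) -> list[str]:
    """Stand-in for a real X/Twitter search; returns headline strings."""
    return []  # wire a real search client in here

def daily_digest() -> None:
    # Run all five category searches in parallel, then post one digest.
    with ThreadPoolExecutor() as pool:
        results = dict(zip(CATEGORIES, pool.map(search, CATEGORIES)))
    sections = [f"*{cat}*\n" + "\n".join(items or ["(nothing new today)"])
                for cat, items in results.items()]
    requests.post(SLACK_WEBHOOK, json={"text": "\n\n".join(sections)})

# Scheduling is just cron: `0 9 * * *` runs daily_digest every morning at 9.
```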

Impact: 30–45 minutes of daily manual curation was fully eliminated. The entire team gets the same information at the same time, in the same format.


Pilot Results (6 Days)

| Metric | Value |
| --- | --- |
| Active users | 3 (Game Design, UA, Art) |
| Custom assistants created | 7 |
| Knowledge base size | ~1 GB (337 docs, 17K chunks) |
| Total messages | 506 |
| Total tokens | 2.3M |
| Departments covered | 3 |

One underrated advantage: because all three departments were on the same platform, knowledge was shared across assistants. The art team could see UA data. UA had access to game design context. Everyone was drawing from the same knowledge base.


Why It Worked

The knowledge base actually understood context. After uploading 295 videos and the full GDD, Mindra retrieved genuinely relevant chunks for each query — not random matches. Asking for "progression math for a digging event" returned progression-related content, not generic game design material.

It acted like a coworker, not a chatbot. Mindra didn't just answer questions. It sent emails, posted reports to Slack, ran scheduled tasks, called the AppsFlyer API, and parsed the responses. It took action.

No engineers required. The AppsFlyer integration was set up via OpenAPI in under 10 minutes. The game designer and UA manager each configured their own assistants independently.

It scales across departments. Three departments, seven assistants, one shared knowledge base. Each team connects its own tools; the knowledge stays in one place.


What's Next for Ace Games

  • Full Slack delivery for all reporting
  • Auto-updating UA dashboards via Google Sheets integration
  • New assistants for live ops and community teams
  • A retention and funnel analysis assistant connected directly to their BigQuery instance

Ace Games ran this pilot in 6 days with real production data. No engineering team. No custom integrations. If you're running a mobile game studio and want to see what this looks like for your team, reach out: zeynep@mindra.co


Written by the Mindra Team, the team behind Mindra's AI agent orchestration platform.
