Building GAME: An Open-Source Adaptive Gamification Engine, Solo, in One Year
The story of how I designed GAME, a plugin-based gamification platform that went from a rough idea to a real deployment.

Most gamification systems are tightly tied to a single application. Their reward logic is usually embedded directly into business code, which makes it hard to reuse, hard to adapt, and even harder to compare across different contexts. That always bothered me.
I wanted something different: a modular engine where incentive strategies could be treated as pluggable components instead of hardcoded features.
Over the course of a year, while doing my PhD, I built that system. It became GAME: an open-source adaptive gamification engine that was later deployed in a real citizen science experiment in Santiago, Chile.
This is not a polished success story with a perfect arc. It is the more honest version: the one with questionable architectural decisions, missing features, production bugs, and the usual gap between “it works on my machine” and “real people are now using it.”
If you have ever thought about building a platform from scratch, or if you are curious about what adaptive gamification looks like under the hood, this is the story.
The Problem That Shouldn’t Exist
What pushed me to build GAME was a simple frustration: gamification engines do exist, but most of them are either closed-source or designed around a single use case.
A lot of gamification platforms are built for one specific context, like fitness, language learning, employee engagement, or customer loyalty. And in many of those systems, the gamification logic is deeply mixed into the application itself. Points, badges, leaderboards, and reward rules are wired directly into the business layer.
That works until you want to reuse the same strategy somewhere else.
At that point, you usually are not reconfiguring anything. You are rewriting it.
I found the academic side equally frustrating. Papers often describe sophisticated adaptive gamification frameworks with elegant diagrams and convincing concepts, but the implementation is either missing, incomplete, or too prototype-oriented to be reused. In many cases, you cannot install the system, inspect it, fork it, or build on top of it.
I wanted to create something more practical: a gamification engine that was agnostic to the domain. A system that would not care whether it was being used in citizen science, e-commerce, education, or something else entirely. A system where the strategy itself, the logic that decides who gets rewarded, when, and how much, could be swapped out as a pluggable component.
That became the core idea behind GAME.
The Spark: What If Gamification Worked More Like Blockly?
The idea really started to take shape when I was looking at Blockly, Google’s block-based visual programming system.
What interested me was not just the visual editor itself, but the underlying principle: you can define logic as modular, composable units instead of baking everything into the surrounding application.
That made me think: what if gamification strategies worked the same way?
Not visually, at least not yet, but structurally. What if a strategy could be defined as an isolated module, configured independently, plugged into an engine, and executed without forcing the rest of the system to change?
In that model, the engine would take care of the plumbing: persistence, scoring records, coins, user state, and API exposure. The strategy would focus only on the reward logic.
That was the seed of GAME, short for Goals And Motivation Engine.
The visual strategy builder is still on the roadmap. But the architecture was designed from the beginning with that direction in mind. The question behind almost every design decision was the same:
Can someone write a new gamification strategy and plug it into the system without having to rewrite the codebase?
Architecture: Why I Chose a Monolith
GAME is a monolith.
I say that directly because I know the instinct today is often to reach for microservices by default. I did not. And honestly, for this project, that was the right call.
I was one person building a research platform under real time constraints. I needed to move fast, keep complexity under control, and avoid spending half my time dealing with inter-service communication, orchestration, tracing, and operational overhead for a system that simply did not need that level of fragmentation yet.
A well-structured monolith with clear boundaries is far better than premature microservices.
This is roughly how I organized the codebase:
```
app/
├── api/          # HTTP endpoints (FastAPI routes)
├── core/         # Config, dependency injection, DB setup
├── engine/       # Adaptive strategies (plugin system)
├── repository/   # Persistence layer
├── services/     # Business logic
├── model/        # Domain models
└── util/         # Utilities
```
The most important part is the engine/ directory. That is where the gamification strategies live.
Each strategy is a Python module that implements a defined interface. The engine loads those strategies dynamically. In practice, that means you can create a new .py file, define how the reward logic works, and let GAME handle the rest: persistence, user management, score history, and API exposure.
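To make that concrete, here is a minimal sketch of what a pluggable strategy could look like. This is an illustration of the idea, not GAME's actual interface: the names `BaseStrategy`, `StrategyContext`, and `calculate_points` are assumptions for the sake of the example.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class StrategyContext:
    """Hypothetical input bundle the engine hands to a strategy."""
    user_id: str
    action: str
    params: dict = field(default_factory=dict)


class BaseStrategy(ABC):
    """Illustrative plugin contract: the engine only ever sees this."""

    @abstractmethod
    def calculate_points(self, ctx: StrategyContext) -> int:
        ...


class FixedPointsStrategy(BaseStrategy):
    """Simplest possible strategy: a flat reward per action type."""

    REWARDS = {"report_created": 10, "report_validated": 25}

    def calculate_points(self, ctx: StrategyContext) -> int:
        return self.REWARDS.get(ctx.action, 0)


# The engine would discover this module, instantiate it, and call the contract:
strategy = FixedPointsStrategy()
print(strategy.calculate_points(StrategyContext("u1", "report_created")))  # 10
```

The point of the shape is that swapping `FixedPointsStrategy` for something contextual changes nothing on the engine side: persistence, history, and API exposure all key off the same interface.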
The rest of the stack is intentionally straightforward:
- Python 3.10+ with FastAPI
- SQLModel + SQLAlchemy
- PostgreSQL
- Keycloak for OAuth2 / OpenID Connect
- Docker / Docker Compose for development
- Kubernetes manifests for deployment
- Poetry for dependency management
Nothing exotic. Nothing there just for the sake of novelty. Every choice was made to reduce operational complexity while keeping the system robust enough to grow.
The Plugin System: Where the Real Idea Lives
The plugin system is the part of GAME I care about the most.
If someone wants to implement a new strategy, the process is intentionally simple.
First, they create a new Python file in the engine/ directory using the existing structure as a template.
Then the strategy receives its inputs. Those inputs can vary depending on the use case. They might be user actions, contextual parameters, geospatial coordinates, timestamps, or data coming from another source. GAME does not force a single domain model for the strategy itself.
From there, the strategy computes a reward decision. That could mean assigning points, giving coins, adjusting scores based on context, or eventually triggering more advanced mechanics.
After that, GAME takes over again. It stores the results, updates user state, records reward history, and exposes the relevant outputs through the API.
That separation is the whole point.
GAME does not need to know the internal logic of the strategy. It only needs a consistent interface. The strategy might be something simple, like a fixed points-per-action model. Or it could be something more contextual, like the proximity-weighted spatial incentive model I used in Santiago. In the future, it could even be an ML-driven policy.
The platform provides the structure. The strategy provides the intelligence.
That is what I mean when I call it adaptive.
Why I Chose Python Over Node
This is one of the questions I get asked most often.
I chose Python over Node.js or Express for a few reasons, but the most important one is that Python gives me a much stronger ecosystem for anything related to algorithms, data processing, analytics, or future machine learning work.
Even though GAME does not currently include ML-based strategies, I designed the architecture with that possibility in mind. If I eventually want to add a model that adjusts incentives based on behavioral data, I would rather do that in the same environment instead of splitting the system between a JavaScript API and a separate Python service.
There is also a more personal reason: I simply work faster in Python.
When you are building solo, the language you think in matters.
FastAPI also turned out to be the right fit. It is fast, clean, async-friendly, and gives you automatic OpenAPI documentation almost for free. The integration with Pydantic and SQLModel helped me reduce a lot of friction between API schemas and domain models, which saved time and prevented a lot of annoying inconsistencies.
What I Knew Was Missing and Shipped Anyway
Like most real projects, GAME was shipped with gaps.
Some of them were obvious from the start.
No WebSockets. GAME is still mostly request-response. There is no real-time push layer yet. If a user earns something, the client has to poll for updates. For the initial deployment, that was acceptable. But for more dynamic scenarios, like live leaderboards or instant notifications, that limitation becomes more visible.
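In practice, "the client has to poll" means something like the helper below. This is a generic sketch, not GAME's client code; the `fetch` callable stands in for whatever endpoint a client would hit.

```python
import time
from typing import Callable, Optional


def poll_until_change(fetch: Callable[[], int], last_known: int,
                      interval: float = 2.0, max_attempts: int = 5) -> Optional[int]:
    """Call fetch() repeatedly until the value differs from last_known.

    Returns the new value, or None if nothing changed within max_attempts.
    """
    for _ in range(max_attempts):
        current = fetch()
        if current != last_known:
            return current
        time.sleep(interval)
    return None


# Stubbed fetcher whose score "updates" on the third call:
calls = iter([100, 100, 120])
print(poll_until_change(lambda: next(calls), last_known=100,
                        interval=0.0, max_attempts=3))  # 120
```

A push layer (WebSockets or server-sent events) would replace this loop entirely, which is why the limitation shows up most in live-leaderboard scenarios.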
No visual strategy builder. The original inspiration behind GAME pointed toward a more visual, configurable system. That part is still not there. Right now, creating a strategy means writing Python code. That is fine for developers, but it is still a barrier for non-technical users. If there is one feature that could really change GAME from a developer-oriented engine into a broader platform, it is this one.
No built-in analytics layer. GAME stores data, but it does not yet provide a native analytics view of what is happening. There is no built-in dashboard for comparing strategies, tracking engagement patterns, or identifying where participation drops. For now, that analysis has to happen outside the engine. I am still evaluating what the right analytics stack should be for that next step.
I shipped anyway because waiting for completeness would have meant not shipping at all.
At some point, a useful system with known limitations is more valuable than an unfinished ideal sitting in a local branch.
From Localhost to the Real World
GAME was not originally built for a single deployment. It was designed as a general-purpose engine.
But later, after presenting the concept at Universidad Alberto Hurtado in Santiago, Chile, the team there saw a concrete opportunity to use it in a citizen science setting through GREENCROWD, an open-source crowdsourcing platform (I will write about that project in a future post).
The use case was straightforward: people would report illegal dumping sites through a mobile application, and GAME would handle the gamification layer behind those reports.
For that deployment, I implemented an adaptive strategy that weighted incentives using spatial context. In simple terms, areas with lower reporting coverage could generate higher rewards, so the system could encourage participants to move beyond their usual routes instead of concentrating activity in the same familiar areas.
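The core of that idea can be sketched in a few lines. This is a simplified illustration rather than the deployed implementation: reports are bucketed into coarse grid cells, and the reward multiplier grows as local coverage drops.

```python
from collections import Counter
from typing import List, Tuple

CELL_SIZE = 0.01  # grid cell size in degrees (~1 km); illustrative value


def cell_of(lat: float, lon: float) -> Tuple[int, int]:
    """Map a coordinate to a coarse grid cell."""
    return (int(lat // CELL_SIZE), int(lon // CELL_SIZE))


def spatial_reward(base_points: int, lat: float, lon: float,
                   past_reports: List[Tuple[float, float]]) -> int:
    """Scale the base reward inversely with how many past reports
    already fall in the same cell: sparse areas pay more."""
    coverage = Counter(cell_of(a, b) for a, b in past_reports)
    local = coverage[cell_of(lat, lon)]
    multiplier = 1.0 + 1.0 / (1 + local)  # 2.0x in empty cells, decaying with density
    return round(base_points * multiplier)


history = [(-33.45, -70.66), (-33.45, -70.66)]  # two reports in the same cell
print(spatial_reward(10, -33.45, -70.66, history))  # crowded cell -> 13
print(spatial_reward(10, -33.30, -70.50, history))  # empty cell -> 20
```

The deployed strategy was more nuanced, but the shape is the same: the reward function reads spatial context the engine passes in, and everything else stays generic.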
Moving from “the API runs locally” to “people are actually using this in the field” involved a lot more than just deploying a backend:
- deploying the infrastructure with Docker Compose
- integrating the API with the GREENCROWD mobile frontend
- implementing a spatially-aware reward strategy
- monitoring runtime issues under real conditions
One of the best decisions I made there was integrating Sentry.
As soon as real users started interacting with the system, things appeared that had not shown up in testing. Network conditions, device differences, unexpected request patterns, edge cases I had not considered: all of it surfaced quickly. Sentry gave me visibility into those failures in a way that made the deployment manageable.
It is easy to feel confident in a system when you are the only one touching it. Production is where confidence gets tested properly.
What I Would Do Differently
If I started GAME again today, I would spend more time upfront defining stricter internal contracts, especially around the plugin interface.
The monolith was still the right choice, but I would formalize the boundaries more aggressively from the beginning. I would make the strategy interface more explicit, validate malformed strategies more carefully, and invest earlier in clearer extension rules.
I would also think harder about the data model from day one.
The current schema works, but it was shaped iteratively as the project evolved. That is not unusual, but a more deliberate modeling phase earlier on would probably have reduced some refactoring later.
That said, I would not change the fundamental stack or the overall direction. Given the constraints I had (solo development, research pressure, limited time, open source from the beginning), I still think the core decisions were right.
The Part People Usually Miss
When people first look at GAME, they usually notice the FastAPI backend, the plugin structure, or the deployment story.
What they often do not notice is the testing effort behind it.
GAME has near-complete unit test coverage. For a solo research-driven project, that is probably more discipline than most people expect. I also used AI-assisted workflows to speed up part of that testing process, especially for repetitive cases, but the goal was always the same: reduce the risk of silent breakage.
I knew that if this engine was going to sit underneath research experiments, I needed more than intuition. I needed confidence that changing one part of the system would not quietly damage another.
The tests also became a form of documentation. In some parts of the project, they explain expected behavior more clearly than the README does.
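To give a flavor of what "tests as documentation" means here, a behavior-level test can read like a specification. The strategy and values below are illustrative stand-ins, not GAME's actual test suite:

```python
import unittest


def fixed_points(action: str) -> int:
    """Toy stand-in for a reward strategy under test (illustrative only)."""
    return {"report_created": 10}.get(action, 0)


class TestFixedPointsStrategy(unittest.TestCase):
    """Each test name states one rule of the reward model."""

    def test_creating_a_report_earns_ten_points(self):
        self.assertEqual(fixed_points("report_created"), 10)

    def test_unknown_actions_earn_nothing(self):
        self.assertEqual(fixed_points("delete_account"), 0)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFixedPointsStrategy)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

When test names encode the rules of the reward model, a reader can learn the intended behavior without opening the strategy code at all.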
What Comes Next
GAME is open source, and the repository is available at github.com/fvergaracl/GAME.
If you are building something that needs a gamification layer, whether that is in citizen science, education, loyalty systems, or another domain, you can fork it, run it, and start implementing your own strategies.
There is still a lot to improve. Real-time communication, a visual strategy builder, better analytics, and more advanced adaptive strategies are all still ahead.
But the core idea has already been tested in the only way that really matters to me: by building it, deploying it, and seeing what happens when it leaves the safety of localhost.
I built GAME because I believed gamification systems should be open, modular, and adaptable.
After one year of building, one real-world deployment, and a lot of lessons in between, I still believe that.