<?xml version="1.0" encoding="utf-8" standalone="yes"?><feed xmlns="http://www.w3.org/2005/Atom"><title>Research</title><link href="https://felipevergara.com/research/"/><link rel="self" href="https://felipevergara.com/research/atom"/><id>https://felipevergara.com/research/</id><updated>2026-03-22T22:26:41Z</updated><entry><title>Research Systems</title><link href="https://felipevergara.com/research/systems/"/><id>https://felipevergara.com/research/systems/</id><updated>2026-03-04T00:00:00Z</updated><summary>Problem: Gamification research suffers from two structural problems: systems that produce non-reproducible scoring outcomes, and platforms that treat all participants identically regardless of spatial or behavioral context.
Standard gamification frameworks rely on heuristic scoring that varies unpredictably under concurrency. This makes it impossible to isolate the effect of a specific strategy across experiments, undermining the scientific validity of results.
At the same time, static incentive structures fail to adapt to the actual participation landscape, leaving underrepresented geographic areas without targeted incentives and compounding existing data inequalities.</summary></entry><entry><title>Research Methods</title><link href="https://felipevergara.com/research/methods/"/><id>https://felipevergara.com/research/methods/</id><updated>2026-03-04T00:00:00Z</updated><summary>Overview: Rigorous evaluation of adaptive gamification systems requires both spatial statistical methods and system-level stress testing. The methods described here form the core evaluation toolkit used across GAME-based research.
The guiding principle is reproducibility: every metric must be computable from a fixed input log and produce identical results across runs.
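The reproducibility principle above can be sketched as a metric that is a pure function of a fixed, ordered event log. This is an illustrative example only; the log format, event names, and scoring rules are assumptions, not the system's actual schema.

```python
# Sketch of a reproducible metric: a pure function of a fixed input log.
# No randomness, no wall-clock reads, no shared mutable state, so two runs
# over the same log must produce identical results.

def participation_score(log):
    """Compute a score from an ordered list of (kind, points) events."""
    score = 0
    for kind, points in log:
        if kind == "contribution":
            score += points
        elif kind == "penalty":
            score -= points
    return score

fixed_log = [("contribution", 10), ("contribution", 5), ("penalty", 3)]

# Identical input log, identical result, on every run:
assert participation_score(fixed_log) == participation_score(fixed_log)
```

In contrast, a scorer that consults the current time or depends on event arrival order under concurrency would fail this check across runs.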
Spatial Analysis: Getis-Ord Gi*. The Getis-Ord Gi* statistic (pronounced "G-i-star") is a local spatial autocorrelation statistic used to identify hot spots and cold spots in geographic participation data.</summary></entry></feed>