
NRN Docs

Introduction

The One Min Story | NRN Agents

Version 3.0: Updated as of May 6th, 2025. Please note that all content on this documentation site is subject to future revisions and updates.

NRN is building an ecosystem that accelerates progress toward AGI, using gaming and robotics as the sandbox.

NRN Agents is a platform that powers AI agent integration in innovative gaming experiences in virtual and physical environments. The tech stack combines data aggregation, model training, and model inspection capabilities across imitation learning and reinforcement learning.

Gaming and robotics provide the perfect testbed to tackle the challenge of achieving AGI. Games mirror the complexity of the real world, featuring dynamic rules and limitless variability that push AI to think beyond static tasks. Similarly, robotics extends these virtual learning environments into physical reality, training agents first in simulations before deploying them in the real world. This bridges the gap between virtual and physical, incorporating real-world complexities, variability, and constraints directly into AI training.

The intersection of AI, gaming and robotics is more than entertainment—it’s a strategic approach to achieving AGI. AI agents trained in these environments won’t just master games; they’ll reason, adapt, and ultimately solve real-world challenges.

The NRN Agents ecosystem is powered by the Neuron token $NRN.

CoinGecko

Dex Screener

What Sets NRN Agents Apart?

NRN Agents is uniquely positioned in the emerging AI Agent landscape.

Among agentic platforms, NRN Agents stands out as one of the best suited to capitalize on this growing opportunity. NRN Agents' emphasis on modeling behavior distinguishes it from other AI tools that rely primarily on large language models (LLMs).

Specializing in behavior cloning, NRN Agents focuses on the actions characters take in virtual or physical worlds. This approach allows NRN's AI agents to emulate behavior in diverse interactive environments, making them adaptable to almost all gaming and simulation environments.

Moreover, NRN Reinforcement Learning (RL) harnesses crowdsourced human gameplay data to train AI agents. These agents represent their communities in AI vs. AI competitions, driving new layers of engagement, speculation, community participation, and revenue streams. By transforming gameplay data into collective intelligence, NRN RL enables co-ownership of advanced gaming agents, making competitive gameplay a truly community-driven endeavor. These experiences can be created in both virtual and physical worlds.

Agents: The Next Big Theme in AI

As we approach the middle of this decade, it’s clear that AI is continuing to evolve at a blistering pace. In 2023 and 2024, much of the attention was on Large Language Models (LLMs), with companies like OpenAI, Anthropic, and others capturing headlines for their advancements in natural language processing. But as we look toward the future, the next major investment theme is coming into focus: AI agents. These autonomous, human-like agents are poised to dominate the technological narrative, offering groundbreaking opportunities in gaming, enterprise automation, and beyond.

What are AI agents?

AI agents are autonomous systems capable of performing tasks and making decisions on behalf of humans, often without needing constant human oversight. AI agents can leverage long-term memory, learn from repeated interactions, and handle a range of tasks autonomously, moving beyond chatbots or narrowly defined algorithms. In the coming years, the narrative will shift from text-based interaction models to agents capable of true autonomy across virtual and physical environments, fundamentally altering how industries like gaming, robotics, e-commerce, and enterprise software function.

Why is this so compelling?

AI Agents represent a new form of labor—one that is vastly more productive in the domains they automate compared to humans. In gaming, for example, AI Agents can autonomously fill multiplayer lobbies, ensuring players always have more engaging and dynamic gaming experiences. In robotics, physical agents trained in simulation environments can transfer their learned behaviors into the real world, autonomously navigating complex environments, manipulating physical objects, and performing intricate tasks traditionally requiring human dexterity and judgment.

This capability of bridging virtual training with real-world execution accelerates the development of autonomous systems and unlocks immense value by not only streamlining operations but also creating opportunities for companies and individuals to monetize these agents. The productivity gains from automating workflows with AI Agents, both virtually and physically, are far-reaching, and the potential for economic value creation is massive.

Why Web3?

One of the key challenges in this new era of unprecedented value creation is the risk of asymmetric distribution. Traditionally, those who control the factors of production also control wealth creation, leading to centralized ownership and power.

Web3 addresses the risk of asymmetric value distribution in AI-driven economies. Through NFTs and tokens, Web3 decentralizes ownership and monetization of AI Agents. This allows communities involved in creating, training, and deploying agents to share in the wealth produced. The model aligns incentives, empowering contributors to own and profit from their input.

In this ecosystem, AI empowers rather than replaces humans. Web3 allows people to influence AI agents and benefit from their value, fostering an inclusive, participatory system. Decentralized value distribution amplifies the network effect, accelerating product growth and adoption while enhancing defensibility. This approach not only ensures fair value distribution but also supercharges community engagement, creating a more equitable and dynamic AI-driven economy.

The Creators of AI Arena

What is it?

In AI Arena, gamers can purchase, train, and battle AI-powered champions. Through NRN, players train their characters through a process called imitation learning, where the AI learns by copying player actions. Once trained, these fighters compete autonomously in ranked battles against similarly skilled opponents. The goal is to train the most powerful AI, climb the global leaderboard, and earn rewards in the native token, $NRN, or Neurons.

A Breakthrough Game Experience

AI Arena is the first game to integrate true Human x AI collaboration.

You transfer your skills to an AI through a process called Imitation Learning.

AI learns from you. AI captures your skills. AI competes for you.

  • AI as the focal point: While AI technology is being used to make game development more efficient and add more dimensionality to peripheral aspects, such as non-playable characters - AI Arena creates an environment where human interaction with AI is the central focus of the experience. This is the breakthrough and key differentiation.

  • Addictive experience: It's very exciting to train your AI and then watch it fight against other AIs. During the 3-minute fight, you are not in control. This heightens the drama and creates a thrilling spectacle.

  • Emotional attachment: Training an intelligent AI is like raising a child. Imagine owning and controlling an intelligent character that learns from you, fights for you, and is a reflection of you! It's way more personal than customizing your gaming character with skins.

AI Arena has abstracted the entire AI research process into a game loop, so as you're playing this game, you're actually learning about AI.

Uniquely Positioned for eSports

The game is almost entirely skill-based. The better you are as a trainer, the better your AI. This game is ideal for competitive eSports because there are no limits to how good your AI can become.

  • Infinite game: Our AI enables an infinite and evergreen competition. In traditional games, the developer controls the boundaries of character potential. In AI Arena, there is no limit to how good AI can become. It is purely a function of the player’s skill and creativity. Similar to Chess and Go, AI Arena is an infinite game.

  • New paradigm in competitive gaming: There are only 24 hours in a day. For competitive gamers, time is the scarcest resource. AI Arena gives competitive gamers the ability to “upload” their skills into a digital vessel - the AI. The AI competes for them autonomously and is always available - 24/7/365.

  • Infinite matchmaking liquidity: Since human input is only required during training and not during battling, this enables infinite matchmaking liquidity in the competition. This means that players can compete and monetize even when they are away from the screen. This enables parallel and concurrent play which increases monetization potential for players and the game ecosystem.

  • Cheat-resistant infrastructure: Because the characters are AIs, all battles are run on AI Arena's servers to prevent cheating, eliminating the need for centralized locations and standardized computers to ensure fair play. In addition, relative to most traditional games, it is particularly difficult to train a bot or AI to play AI Arena effectively.

Reward Mechanism

Rewards in AI Arena are distributed based on three main factors:

  1. NFT Performance: How well the NFT performs in the current round.

  2. NRN Staked: The amount of NRN staked on the NFT during the round.

  3. Elo Score: The skill level of the NFT, which impacts the points earned in battles.
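As an illustration only, the three factors above could combine into a per-round reward weight roughly as follows. The exact formula is not published in these docs; the multiplicative combination and the normalization step are assumptions:

```python
def reward_shares(fighters):
    """Weight each fighter by the product of the three documented factors
    (round performance, NRN staked, Elo score), then normalize so the
    shares sum to 1. Illustrative sketch, not the actual formula."""
    weights = [f["performance"] * f["staked"] * f["elo"] for f in fighters]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical example: two NFTs with different stakes and skill levels.
shares = reward_shares([
    {"performance": 0.8, "staked": 100, "elo": 1500},
    {"performance": 0.5, "staked": 200, "elo": 1400},
])
```

Under this assumed weighting, a fighter can compensate for a lower Elo with stronger round performance or a larger stake, which matches the spirit of the three-factor description above.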

More Information

For a detailed breakdown of the AI Arena game ecosystem, please visit its documentation site.

Meet the team

Leader in AI game Tech

With AI Arena, we created a groundbreaking game that pushed the boundaries of competitive gameplay.

With NRN Agents, this technology is now available to propel the entire sector to new heights.

Founder, CEO & CTO - Brandon Da Silva

Brandon previously led ML research at a quant investment firm. He developed and implemented a $1 billion quant trading strategy focused on risk parity and multi-asset portfolio management.

Now, he's set his sights on revolutionizing the AI agent game.

Co-Founder & COO - Wei Xie

Prior to ArenaX Labs, Wei managed a $7 billion liquid strategies portfolio at an investment firm.

As Chair of the innovation investments program, he led strategy in the emerging fields of AI and web3.

Team of nearly 20 at ArenaX Labs

ArenaX Labs is a group of passionate individuals looking to make their mark on the AI and gaming worlds. With industry-leading talent in development, art, and marketing, their work is showcased in one of the most innovative web3 games, AI Arena.

NRN Reinforcement Learning

Reinforcement Learning

The Significance of Reinforcement Learning in Gaming

Reinforcement learning is poised to revolutionize the esports landscape, introducing new forms of entertainment. One of the most exciting possibilities is the rise of AI vs. AI tournaments, where RL agents battle it out in arenas designed for the spectacle of artificial intelligence. These events could captivate audiences, offering unique and unpredictable matches where AI strategies clash in fascinating and sometimes entirely unexpected ways.

Reinforcement learning doesn’t just elevate gameplay—it also enhances the fan and spectator experience. Imagine a world where fans can actively participate in the training and evolution of these RL agents, perhaps by owning a stake in a particular agent or team of agents. This ownership could give fans a deeper sense of connection and investment, transforming passive spectators into active contributors. By participating in training, influencing strategy, or even just rooting for "their" AI, fans would develop a unique affinity with the agents, forming a powerful engagement nexus that ties the community together.

The integration of RL could also lead to AI vs. Human tournaments, where human players or teams compete against ever-evolving AI opponents. These matches would test not only human skill but also the AI's capacity to adapt and counter human creativity, creating a high-stakes, adrenaline-fueled environment that would be thrilling to watch.

Overall, RL opens the door to a new era of gaming and esports—one that’s more dynamic, more interactive, and more inclusive for players and fans alike. As we dive into NRN Reinforcement Learning, we aim to harness these possibilities, setting the stage for unparalleled gaming experiences and community-driven innovation.

What is Reinforcement Learning

Reinforcement learning (RL) is a subcategory of machine learning that teaches agents how to make decisions by learning from rewards. Imagine you’re playing a new video game. At first, you don’t know the best strategies, so you try different actions to see what works. If a move gets you points or helps you win, you remember to use it more often. If a move makes you lose points or fail, you try to avoid it next time. Over time, you get better by learning what works and what doesn’t.

In reinforcement learning, the AI agent does something similar:

  • The AI agent is placed in an environment, like a game or a virtual world.

  • It takes actions and gets feedback from the environment in the form of rewards (for doing well) or penalties (for making mistakes).

  • The goal of the AI is to maximize the rewards over time. It learns through a process of trial and error, trying different actions, seeing the results, and adjusting its behavior to get better outcomes.

The key idea is that the AI agent learns how to succeed through experience rather than being directly told what to do. More specifically, NRN Agents differentiates itself by using offline reinforcement learning. Instead of the agent learning from its own trial and error, it learns from the experiences of others. This is like a student learning to ride a bike by watching videos of other riders, observing their successes and failures, and using that knowledge to avoid falling and improve faster. By leveraging crowdsourced gameplay data, the NRN Agents approach enables the AI to learn efficiently and effectively from the collective experiences of others.
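The offline idea can be sketched with tabular Q-learning run over a fixed log of transitions, i.e. other players' recorded experience, with no live environment interaction. The states, actions, and hyperparameters below are invented for illustration and do not reflect NRN's actual training stack:

```python
from collections import defaultdict

def offline_q_learning(transitions, alpha=0.1, gamma=0.9, epochs=50):
    """Learn action values purely from a fixed log of
    (state, action, reward, next_state) tuples: offline RL in miniature.
    The agent never acts in the environment; it only replays the log."""
    q = defaultdict(float)
    actions = {a for _, a, _, _ in transitions}
    for _ in range(epochs):
        for s, a, r, s2 in transitions:
            # Bootstrap from the best known value of the next state.
            best_next = max((q[(s2, a2)] for a2 in actions), default=0.0)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

# Hypothetical logged gameplay: near the arena edge, retreating paid off
# and attacking did not. "Bad" data is still informative.
logged = [
    ("near_edge", "attack", -1.0, "knocked_out"),
    ("near_edge", "retreat", +1.0, "safe"),
]
q = offline_q_learning(logged)
```

Note how the losing transition is as useful as the winning one: the negative reward teaches the agent what not to do, which is exactly the point made later about lower-skilled players still contributing valuable data.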

What is NRN RL?

Introducing NRN Reinforcement Learning (RL)

NRN Reinforcement Learning (RL) harnesses crowdsourced human gameplay data to train AI agents capable of superhuman performance. These agents represent their communities in AI vs. AI esport competitions, driving new layers of engagement, speculation, community participation, and revenue streams. By transforming gameplay data into collective intelligence, NRN RL enables co-ownership of advanced gaming agents, making competitive esports a truly community-driven endeavor.

Beyond gameplay, NRN RL opens up innovative monetization opportunities. Players who contribute data that help train successful RL agents could benefit economically. In this way, NRN RL transforms data and skill into valuable assets, creating an entirely new economic landscape for players and fans.

New Esports Entertainment Paradigm

NRN RL revolutionizes esports with AI vs. AI based tournaments featuring thrilling PvP battles between RL Agents. Players can collaborate in squads to train these RL Agents, competing in high-stakes matches against other community trained AIs. This blend of skill, strategy, and teamwork offers an exhilarating experience with significant rewards. By focusing on competition and community, NRN RL taps into the booming esports market, unlocking new growth opportunities. The community-driven angle and the prospect of economic participation are uniquely fitting for Web3 culture and ethos. In Web3, where decentralization, shared ownership, and value creation are key, NRN RL provides a natural adoption ground.

And it doesn’t stop there. The integration of RL also sets the stage for AI vs. Human tournaments in the future, where human players or teams compete against ever-evolving AI opponents. These matches would test not only human skill but also the AI’s ability to adapt and counter human creativity, creating a high-stakes, adrenaline-fueled environment that is as thrilling to watch as it is to participate in. NRN RL blurs the line between human ingenuity and AI innovation, redefining what’s possible in the world of competitive gaming.

Expanding Reinforcement Learning Across Virtual and Physical Worlds

Imagine a world where every game, from action-packed shooters to complex strategy titles, features adaptive RL agents—and where robots seamlessly translate simulation-based training into real-world execution. The NRN Agents SDK makes this future a reality, bringing the power of reinforcement learning to virtually any game genre and bridging the gap to physical robotics applications. With NRN RL, adaptive gaming, intelligent robotics, and crowd-sourced super-intelligence become the new standard, enriching experiences across both virtual and physical environments and unlocking limitless potential for innovation.

Training NRN RL Agents

There are two primary stakeholder groups in the training of NRN's RL Agents: Sponsors and Players.

Sponsors and RL Agent Creation

Sponsors can create and deploy untrained RL Agents by staking a material amount of $NRN tokens. This staking requirement ensures that RL Agents remain scarce and valuable, encouraging strategic investment and early adoption. In return, Sponsors share in the profits generated by their agents, earning 10% of the rewards from game competitions and campaigns. The limited supply of RL Agents, paired with a structured issuance mechanism, enhances their perceived value and drives demand over time.

Players and Data Capsules

Players can contribute their gameplay data to train RL Agents of their choice. Players must first stake or lock $NRN tokens, which generates Data Capsules. These act as containers, allowing players to submit gameplay data to train specific RL Agents. However, creating a Data Capsule doesn’t require immediate data contribution, giving players flexibility in how they engage. Here’s how it works:

Creating a Data Capsule

By staking $NRN, players create Data Capsules on a 10:1 basis (i.e., 10 $NRN for 1 Data Capsule). These capsules serve as containers that index gameplay data, track contribution metrics, and determine campaign rewards. Players have two options with these Data Capsules:

  • Passive Staking: If players choose not to contribute data, they still earn a passive reward from a base-level pool that's collected by the NRN Platform per campaign. This pool offers modest rewards for simply staking $NRN, acknowledging the time value of staking and providing a minimum incentive for participants.

  • Active Data Contribution: When gameplay data is submitted via a Data Capsule, it becomes eligible for campaign-specific rewards, which are tracked and indexed based on the data’s quality and contribution impact.
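The stake-to-capsule conversion above is simple enough to state in code. The 10:1 ratio is documented; treating any remainder as simply remaining staked without minting a capsule is an assumption for illustration:

```python
NRN_PER_CAPSULE = 10  # documented 10:1 ratio: 10 $NRN per Data Capsule

def capsules_from_stake(nrn_staked):
    """Whole capsules only. How a non-multiple-of-10 remainder is
    handled is not specified in the docs; here it just mints nothing."""
    return nrn_staked // NRN_PER_CAPSULE
```

For example, staking 100 $NRN would mint 10 Data Capsules, each of which can then sit passively or carry a gameplay-data submission into a campaign.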

Data Contribution and Quality Assessment

When players submit gameplay data, it is evaluated by an attribution algorithm. This system ensures that:

  • High-quality contributions yield higher rewards, encouraging players to submit data that is high in quality and uniqueness. It is important to note that high-quality data does not require the player to be supremely skilled. Less skilled players can also contribute valuable, quality data, because "bad" data helps the RL agent learn what NOT to do.

  • Sybil resistance is maintained, as redundant or undifferentiated data streams are penalized, ensuring the ecosystem benefits from diverse, meaningful contributions.

Each Data Capsule is linked to specific campaigns and tracks its associated returns.

Tracking and Redeeming Rewards

Players can view their contributions and performance in real-time through a personalized dashboard, which displays their data’s relative share, performance points, and accrued rewards. This transparency enables players to monitor the impact and value of their gameplay data.

To claim rewards, players must use the burn | redeem mechanism:

  • End-of-Campaign Redemption: When a campaign wraps up and rewards are settled, players can burn their Data Capsules to unlock $NRN tokens and claim any accrued rewards.

  • Early Redemption: Players have the option to burn their Data Capsules at any time to unlock their $NRN, but if they redeem before rewards are issued, they forfeit any potential campaign rewards, which are then reallocated to the $NRN community treasury. This system provides liquidity for players who wish to access their staked tokens sooner while maintaining the integrity of the reward structure.
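The two redemption paths above reduce to one rule: burning always unlocks the staked $NRN, while accrued campaign rewards are paid only if the campaign has settled and otherwise flow to the community treasury. A minimal sketch, where the function shape is illustrative rather than an actual API:

```python
def burn_capsule(staked_nrn, accrued_rewards, campaign_settled):
    """Apply the burn/redeem rules described above.
    Returns (payout_to_player, amount_to_treasury)."""
    if campaign_settled:
        # End-of-campaign redemption: stake plus settled rewards.
        return staked_nrn + accrued_rewards, 0
    # Early redemption: stake comes back, rewards are forfeited
    # and reallocated to the $NRN community treasury.
    return staked_nrn, accrued_rewards
```

So a capsule backed by 10 $NRN with 5 $NRN of accrued rewards pays out 15 after settlement, but only 10 if burned early, with the 5 going to the treasury.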

Reward Distribution

Rewards from competitions and campaigns are distributed to incentivize all contributors, sustaining the NRN RL ecosystem:

  • 70% to Players (Data Contributors): Players receive the majority share, distributed based on their data quality and performance impact.

  • 10% to Sponsors: Sponsors receive a profit share for their RL Agents, as they provide branding, marketing, and distribution to attract top players.

  • 20% to $NRN Community Treasury: The remaining rewards flow to the community treasury, supporting NRN RL's growth and future development.

    • 10% of the Community share is passed through to $NRN stakers in NRN RL.

This balanced reward structure and burn/redeem mechanism creates a flexible, sustainable system that allows players to participate at varying levels and aligns incentives across the ecosystem.
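The split above (70% players, 10% sponsors, 20% treasury, with 10% of the treasury share passed through to $NRN stakers) can be written out directly; only the function shape is an invention here, the percentages are from the list above:

```python
def distribute_rewards(total):
    """Apply the documented 70/10/20 reward split, with 10% of the
    community treasury share passed through to $NRN stakers."""
    players = 0.70 * total
    sponsors = 0.10 * total
    community = 0.20 * total
    stakers = 0.10 * community      # pass-through from the community share
    treasury = community - stakers  # what the treasury actually retains
    return {"players": players, "sponsors": sponsors,
            "treasury": treasury, "stakers": stakers}

split = distribute_rewards(1000)
```

On a 1,000 $NRN reward pool this works out to 700 for players, 100 for sponsors, 180 retained by the treasury, and 20 for stakers.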

NRN RL's Value to Studios & Web3 Communities

Agentic Competitions: A Marketing Supercharger for Web3 Games

Web3 games that integrate NRN RL unlock an exciting and dynamic realm of AI-driven competitions. With an NRN integration, any game can host tournaments featuring RL Agents sponsored by gaming guilds, DAOs, meme coin communities, or NFT projects, and trained collaboratively by their members.

For example, a racing game could feature an NRN RL competition with agents sponsored by prominent Web3 communities, whose members contribute gameplay data to enhance their agent's performance. These community-branded RL Agents would compete in high-stakes tournaments, creating an exciting spectacle for fans while building esports-like entertainment value.

This innovative approach not only enriches the gaming experience but also acts as a powerful go-to-market strategy for Web3 games:

  • Community Activation: Web3 games can tap into thriving communities that are already active within the NRN Agents ecosystem. Guilds, DAOs, meme coins, and NFT projects rally their members to contribute gameplay data, fostering a sense of shared ownership and pride in their RL Agent’s success. By leveraging the NRN Agents ecosystem, games gain instant access to these communities. Aligning tournaments with popular guilds or influencers further enhances social reach, attracting new players and driving visibility.

  • AI and Branding Integration: Games can seamlessly integrate AI and RL Agents into their branding, tapping into Web3’s growing demand for narratives centered around AI, intelligent agents and competitive gameplay. This adds a compelling dimension to the game’s appeal and positions it as a forward-thinking, innovative product.

  • Momentum for TGEs: Incorporating AI-driven competitions into Play-to-Earn (P2E) or Token Generation Event (TGE) campaigns boosts player participation, supercharging a game’s launch and establishing the momentum needed for long-term success.

Rise of Community-Owned Agents: Supercharging Fandom and Community Bonds

At the heart of NRN RL lies the concept of co-ownership of AI agents, transforming communities into active participants in their RL Agent's journey:

  • Sponsors: Communities can sponsor their own RL Agents and encourage their members to contribute high-quality gameplay data. By involving every member in the training process, these agents become a true representation of their collective effort.

  • Skill and Success: The better the data contributed by the community, the more skilled and competitive the RL Agent becomes. This increases its chances of winning competitions and earning significant rewards.

  • Fair Reward Distribution: NRN’s Attribution Algorithms ensure that every community member contributing data is fairly rewarded based on the quality and impact of their input. This shared financial participation in the agent’s success intensifies community bonds and enhances the sense of belonging.

This model creates a supercharged fan experience, where players feel a deep connection to their community’s RL Agent. By contributing gameplay data and participating in its success both emotionally and financially, members rally around their agent as a shared symbol of pride and identity.

Competitive Speculation and Entertainment Layers

NRN RL could also introduce novel speculation experiences that enhance the entertainment value. Here are some preliminary ideas:

  • Fantasy Leagues: Fans and players can create fantasy lineups featuring their favorite RL Agents, earning points and rewards based on their agents' performance in tournaments. RL Agents can be tokenized as NFTs, allowing guilds and fans to own, trade, and profit from these agents.

  • Betting Systems: Games can enable continuous streams where fans place bets on match outcomes. This can be further enhanced with purchasable items that fans introduce into the middle of matches, altering the match's outcome and adding excitement and engagement for viewers.

  • High-Stakes Competitions: Guilds can enter tournaments with mechanics like "winner-takes-all," where losing agents are burned, and the winner absorbs their rewards. This creates intense, high-stakes gameplay experiences.

This integration brings together the best of Web3 culture: shared ownership, community collaboration, and speculation-driven entertainment. With NRN RL, games move beyond the traditional player experience, entering a realm where human creativity and AI intelligence merge to create something truly revolutionary.

*Disclaimer - this is intended to be an illustration of the capabilities that can be introduced as add-ons to the game. The developers are not engaging in such development activities but identify these as opportunities that can be explored by the community.

Value Creation for an Integrated Ecosystem

NRN RL is an innovative, community-driven AI ecosystem where owners, players, sponsors, games, and data partners collaborate to co-own RL Agents. By merging micro-level data contributions with macro-level economic incentives, NRN RL establishes a sustainable, value-generating cycle benefiting numerous stakeholders within the Web3 ecosystem.

Stakeholders and Their Roles:

Players: The heart of the ecosystem, players contribute data and earn the majority of rewards from each competition or campaign. Their engagement drives the evolution of RL Agents, making the AI vs. AI esports experience ever more challenging and engaging.

Sponsors: While anyone can create an RL agent, it is particularly appealing for gaming guilds, DAOs, or KOLs to own and deploy them in game competitions. Sponsors earn a profit share based on their RL agent's performance, creating a new monetization avenue and a way to strengthen their bond with their fans or communities. This motivates them to effectively manage and promote their agents, and to attract player contributors who can make them high-performing.

Web3 Games: Games that integrate with NRN Agents gain immediate access to NRN RL. These games can incorporate RL agent competitions into upcoming P2A or TGE campaigns. Through NRN RL, they access a roster of gaming guilds, activating players to both play their game and contribute to the spectacle of AI vs. AI competition leagues. This serves as a powerful community activation and marketing tool, supercharging go-to-market campaigns. It attracts players and enhances the games' visibility and followership, setting up the momentum needed for a successful TGE. Ultimately, incorporating NRN RL ensures a high ROI for a game's P2A campaign or TGE launch.

Data Platforms: Data aggregation platforms looking to source unique gaming-related data can sublicense gameplay data aggregated on NRN from user contributions for use on their platforms. In return, they can offer their native token incentives to reward players for their contributions. These rewards are distributed through the NRN platform, creating an additional revenue stream for both Players and Owners. This model extends beyond data platforms to include other stakeholders in the DeAI ecosystem.

Self-Reinforcing Flywheel of Value

The integrated system of NRN RL creates a self-reinforcing flywheel that drives continuous growth and a win-win-win dynamic for all stakeholders:

  • Enhanced AI Agents: As more data is contributed, RL Agents become smarter and more competitive, driving engagement and attracting new participants. Data platforms receive higher volumes of valuable data, adding to the traction of their network.

  • Greater Game Engagement: Improved agent performance leads to more exciting and challenging game experiences, boosting viewership and expanding the user base. Game studios that integrate with NRN Agents benefit from viral user acquisition and unique attention capture.

  • Monetization and Ecosystem Growth: Successful game competitions and data sublicensing attract larger incentive pools from games and Data Partners. Players and Owners earn more lucrative rewards.

  • Increased Utility for $NRN: The need to stake $NRN for agent creation and data contributions increases the utility of the token. As third-party titles integrate NRN RL, the ecosystem generates more revenue, fueling the buyback of $NRN.

Together, this ecosystem fosters collaboration, strategic investment, and data-driven AI development, positioning NRN RL as a pioneer in Web3 gaming and AI innovation.

NRN Robotics

Robotics: The Next Frontier

Robotics & "Embodied AI" - An Emerging Trillion Dollar Market

The global robotics industry stands at the brink of explosive growth, poised to surge from a $35 billion market in 2023 to over $260 billion by 2030. As NVIDIA CEO Jensen Huang aptly puts it: "The ChatGPT moment for general robotics is just around the corner."

If the digital age transitioned from tangible hardware into intangible software, the AI age began with software and is now turning back toward its ultimate challenge—the physical world. Robots, autonomous vehicles, drones, and humanoids powered by physical AI agents will soon permeate everyday life, reshaping entire industries and displacing traditional labor.

Robotics as a Stepping Stone towards AGI

While digital AI has seen explosive growth in text, image, and video generation, these domains operate in structured, often predictable environments. Robotics introduces a new level of complexity—forcing AI systems to engage with the physical world where uncertainty, sensor noise, and real-time constraints are unavoidable. This makes robotics not just a natural progression, but a necessary proving ground for Artificial General Intelligence (AGI).

Robotics accelerates AGI development by testing and cultivating key capabilities that digital environments struggle to expose:

  • Embodied Multimodal Perception: Robots must process and integrate a diverse range of sensory inputs—visual, tactile, auditory, and proprioceptive. This multimodal grounding is essential for building AI systems that understand the world in context.

  • Adaptability and Real-Time Reasoning: In physical environments, no two situations are exactly alike. Robots must handle edge cases, adjust on the fly, and reason through causal chains of action and reaction. This trains models to generalize and respond to novelty—crucial traits for AGI.

  • Continuous Learning in the Wild: Static models break down in dynamic settings. Continual learning frameworks are required to enable agents to improve incrementally from real-world feedback, creating a perpetual learning loop that mirrors how intelligent organisms evolve over time.

In essence, robotics puts intelligence under pressure. It introduces real-world friction, forces adaptation, and provides the kind of rich, dynamic context that AGI systems must ultimately master.

Challenges in Robotics

A Multi-Dimensional Problem

Real-world data, crucial for training robust physical AI systems, remains severely limited. Current robotic training heavily depends on datasets collected at significant expense, yet these datasets often lack sufficient diversity or scalability for broad applicability.

To illustrate the magnitude of this challenge, consider the following comparison:

Language vs. Robotics

  • Language: Approximately 15 trillion text tokens are included in modern language-model corpora (e.g., FineWeb dataset for Llama 3, Hugging Face).

  • Robotics: Approximately 2.4 million robot-motion episodes are available in today's largest open corpus (Open X-Embodiment aggregate, arXiv).

To make the comparison "apples to apples", we convert robotics episodes into timesteps:

  • Assuming a control frequency of 20 Hz and a task duration of 25 seconds, each episode contains 500 timesteps

  • The 2.4 million episodes therefore yield roughly 1.2 billion timesteps

  • This equates to roughly 12,500 times more data for language than robotics.
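This back-of-the-envelope conversion can be reproduced in a few lines; the 20 Hz control frequency and 25-second task duration are the assumptions stated above, not measured values:

```python
# Back-of-the-envelope comparison of language vs. robotics data volumes,
# using the assumptions stated above (20 Hz control, 25 s tasks).
language_tokens = 15e12               # ~15 trillion text tokens (FineWeb-scale)
episodes = 2.4e6                      # ~2.4 million robot-motion episodes

control_hz = 20                       # assumed control frequency
task_seconds = 25                     # assumed task duration
steps_per_episode = control_hz * task_seconds    # 500 timesteps per episode

robot_timesteps = episodes * steps_per_episode   # 1.2 billion timesteps
ratio = language_tokens / robot_timesteps        # ~12,500x more language data

print(f"steps/episode={steps_per_episode}, timesteps={robot_timesteps:.1e}, ratio={ratio:,.0f}")
```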

However, this disparity extends beyond mere scale. Additional complexities make scaling robotics significantly more challenging compared to language or images.

The General Approach to Training LLMs

Mainstream large language models (LLMs) typically rely on a static, "one-off harvest + offline pre-training" approach:

  • Capture a static snapshot of web data → offline preprocessing → single extensive pre-training run.

  • Researchers assemble a fixed corpus (e.g., Common Crawl, Wikipedia, code repositories), clean and deduplicate it, and then train the model once. According to a 2025 survey titled "LLMs as Repositories of Factual Knowledge," these models are "static artifacts, trained on fixed data snapshots" (arXiv).

  • Periodically, this process is repeated for subsequent model generations. For example, Llama 3's creators compiled a 15-trillion-token FineWeb corpus from 96 separate Common Crawl snapshots, all prepared as a fixed batch prior to training.

  • While fine-tuning and RLHF (Reinforcement Learning from Human Feedback) introduce limited fresh data, the fundamental bulk-harvest paradigm remains unchanged. Alignment processes use orders-of-magnitude fewer tokens and occur offline, resulting in models shipped as static checkpoints until the next update cycle.

The Unique Challenge of Embodied Systems

Unlike static LLMs, embodied AI systems are inherently dynamic. Static corpora, foundational for models like ChatGPT, quickly become outdated when applied to real-world robotic hardware. The "one-off harvest + offline pre-training" method is ineffective for robotics because robotic policies degrade rapidly under real-world conditions (e.g., when joints heat up). Consequently, embodied AI must adopt a streaming data approach, continuously adapting and evolving rather than relying on fixed, warehoused datasets.

Beyond the streaming-data requirement, several other challenges in robotics illustrate why simply increasing data volume is insufficient for advancing embodied AI:

  • Cost spiral: Each extra 10k demos can cost $100k-$1M in lab time, hardware depreciation, and staff. Data-scaling curves could therefore look super-linear, not like cheap "web-crawl" curves.

  • Reality gap: Sim-trained skills miss unmodeled friction, flex, delays, etc. Truly covering all edge cases with domain randomization alone might not be possible for generalized robots, so sim-to-real bridges become necessary.

  • Safety-bound exploration: Robots can't just "self-play" like AlphaGo in the real world, because bad policies damage hardware or humans. This limits the amount and diversity of autonomous real-world data you can gather.

  • Non-stationary world: Factory layouts evolve; lighting, surfaces, and human coworkers change over months. Policies trained on stale datasets quickly become sub-optimal, so constant updating is required.

  • Continual learning: Fine-tuning a robot's model with today's data results in catastrophic forgetting of yesterday's. This demands sophisticated lifelong-learning algorithms beyond basic fine-tuning.

  • Hardware divergence & cumulative drift: Two "identical" robots behave differently once tolerances, joint backlash, or firmware drift set in; wear-and-tear changes dynamics daily, and compounding error means the same policy executed for 10k steps drifts into entirely new state space. Once deployed, new data must be robot-specific; universal datasets lose value, and perpetual fine-tuning or online adaptation becomes necessary.

Scaling Robotics: Bridging the Gap from Simulation to Reality

The journey toward general-purpose robotics hinges critically on the ability to scale data, models, and learning methodologies effectively.

The Role of Simulation and the Sim-to-Real Gap

Simulated environments like NVIDIA’s Omniverse have become indispensable for training robotic systems, enabling high-throughput generation of synthetic data. These environments allow for rapid prototyping, exhaustive testing, and broad scenario coverage. However, despite their scale, simulations struggle to capture the long-tail complexity of real-world physics and human environments—a persistent challenge known as the sim-to-real gap.

To address this, high-fidelity sim-to-real pipelines are emerging that incorporate gap-closing techniques, such as domain randomization, sim-to-real transfer with residual learning, and physics refinement. These pipelines dynamically adapt simulated environments to better match real-world observations, creating a more robust bridge between virtual training and physical deployment.
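As an illustration of the first of these techniques, domain randomization can be as simple as resampling physics parameters at the start of every simulated episode. The parameter names and ranges below are hypothetical placeholders, not values from any specific simulator:

```python
import random

# Illustrative domain randomization: resample physics parameters at the start
# of every simulated episode. Parameter names and ranges are hypothetical.
RANDOMIZATION_RANGES = {
    "friction":     (0.5, 1.5),    # surface friction coefficient
    "motor_gain":   (0.8, 1.2),    # actuator strength multiplier
    "sensor_noise": (0.0, 0.05),   # std-dev of added observation noise
    "latency_ms":   (0.0, 30.0),   # simulated control-loop delay
}

def sample_domain() -> dict:
    """Draw one randomized physics configuration for a training episode."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# Each episode trains under a freshly sampled configuration, so the policy
# cannot overfit to a single idealized simulator.
episode_config = sample_domain()
print(episode_config)
```

Policies trained across many such configurations are more likely to treat the real robot's physics as just another sample from the training distribution.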

Real-World Data as a Complementary Foundation

To overcome these limitations, real-world data acquisition becomes essential. Techniques such as video capture, motion capture (mocap), and teleoperation (remote operation of robots by human operators) offer rich datasets that more accurately reflect the unpredictable and nuanced nature of the physical world. These methods not only help validate and fine-tune models trained in simulation but also expand the diversity of data available for training, a key factor in developing robust and adaptive robotic systems.

Transfer Learning and Architectural Innovations

Beyond data collection, advancements in model architectures and training paradigms play a pivotal role in scaling robotics. Transfer learning has emerged as a particularly powerful technique, enabling AI agents to generalize behaviors learned in one domain and apply them across different tasks or environments. This reduces the need for extensive retraining and allows robots to adapt more quickly to new challenges. By reusing prior knowledge, these models require less data to perform competently in unfamiliar settings, significantly accelerating development timelines and reducing resource requirements.

Learning at All Levels

One of the most transformative frontiers in scalable robotics is the ability to adapt continuously and autonomously. At the core of this vision lies the development of continual learning pipelines—systems that enable robots to learn incrementally from their own experiences, adapt on-the-fly to changing conditions, and refine their behaviors over time without losing previously acquired knowledge.

Toward Universal Physical Intelligence

The convergence of scalable simulation, real-world data integration, and transfer learning unlocks a path toward universal physical intelligence—robotic systems capable of generalizing across a wide range of environments and tasks. These foundational pillars must be advanced in tandem to realize the vision of autonomous agents that can seamlessly operate in the unstructured, unpredictable conditions of the real world.

The NRN Advantage

Expanding From Virtual to Physical Agents

NRN Agents began its journey in gaming, pioneering innovative experiences such as AI Arena, a competitive platform that allowed players to train AI agents using imitation learning. NRN then demonstrated its reinforcement learning (RL) capabilities through campaigns such as Floppy Bot, showcasing the power of crowdsourced human gameplay data to create agents that are capable of superhuman performance.

The progression into robotics, particularly gamified competitions or sport, is a natural evolution for NRN Agents. It directly leverages the core technologies originally cultivated within gaming—especially reinforcement learning and real-time adaptability achieved through continual learning pipelines. Robotics and gaming share fundamental challenges: both demand AI agents capable of rapidly learning, adapting, and refining behaviors in response to complex, dynamic, and unpredictable environments.

Powering Robotics Through Reinforcement Learning Infrastructure

With the NRN Agents SDK, NRN is building an integrated platform designed to bridge the sim-to-real gap in robotics through advanced RL and continual learning capabilities. Leveraging the infrastructure originally developed for virtual gaming, NRN is now applying its proven AI pipeline to the complexities of real-world robotics tasks. At its core, the SDK features three distinct components that set NRN apart:

1. Gamified Browser-Based Robotics Data Collection

NRN offers a browser-based experience that transforms robotics data collection into a game. Users can intuitively control simulated robots through their browser—no installs, no technical expertise required. As they play, their actions generate high-quality behavioral data that powers training pipelines for real-world robotic systems. This gamified approach lowers the barrier to entry, allowing anyone with a computer to contribute meaningfully to cutting-edge robotics research.

2. Crowdsourcing Platform for Human Behavioral Data Collection

Building on this interactive experience, NRN’s crowdsourcing platform structures and scales the data collected through global participation. Originally developed for gameplay data collection, this capability has now been extended into the robotics domain, enabling large-scale acquisition of human behavior demonstrations for use in imitation learning and reinforcement learning pipelines.

At the heart of this system is a Web3-incentivized task ecosystem, where contributors around the world can submit demonstrations using the NRN SDK. This decentralized approach significantly reduces data collection overhead and increases the diversity of training data. Moreover, the attribution algorithm ensures rewards are distributed based on value and uniqueness, providing resistance to Sybil-type attacks. By opening participation to a wide base of users, NRN transforms crowdsourcing into a scalable, community-driven engine for behavioral data generation.

For more information on the NRN data collection platform and attribution algorithm, see What is NRN RL?

3. Continual Learning Pipeline for Reinforcement Learning Tasks

At the core of NRN’s robotics platform is a continual learning pipeline engineered to enable long-term adaptability and improvement. Originally validated in competitive gaming environments, this system empowers robots to learn incrementally from real-world experience, refine their behaviors over time, and avoid catastrophic forgetting—all without needing full retraining or manual intervention.

By establishing iterative "sim-to-real-to-sim" feedback loops, NRN ensures that agents evolve continuously, adjusting to new tasks, environments, and failures. In robotics deployments, this translates into systems that become more intelligent with each cycle of operation—growing capability via software, not hardware replacements.

To support this, the NRN SDK integrates lightweight edge inference and efficient online updates, keeping local policies tightly synchronized with each robot’s behavior and physical state. This addresses the moving-target problem caused by hardware variation or wear-and-tear over time, ensuring consistent performance across deployments.

The result is a robust, adaptive learning system that evolves with its environment and hardware. Paired with NRN’s crowdsourced behavioral data and browser-accessible Sim-to-Real tools, the continual learning pipeline completes a unified foundation for scalable, intelligent robotics.

Robotic Sports

The Dawn of Robotic Sports

On April 10, 2025, Unitree Robotics announced plans to livestream the world's first humanoid robot boxing match, featuring its G1 robots. The event, titled "Unitree Iron Fist King: Awakening!", is scheduled to take place in "about a month", though an exact date has not been specified.

Unitree has released promotional videos showcasing the G1's capabilities, including sparring sessions with humans and other robots. In these demonstrations, the G1 exhibits quick recovery after knockdowns and performs stylized martial arts movements. The company has also highlighted the G1's agility through feats such as performing a kip-up and completing a side flip, marking significant milestones in humanoid robot mobility.

Sports and Competitive "Games" as an Accelerant for Robotics and AGI

Sports and competitive games offer uniquely powerful testbeds for advancing robotics and AGI. These environments introduce unpredictable, high-stakes challenges that push AI systems far beyond the boundaries of routine, everyday tasks. By forcing agents to contend with real-time decision-making, fast-paced interactions, and emergent complexity, they act as accelerants for technical innovation and systems robustness.

Long-Tail Exploration Through Dynamism

Perhaps most critically, competitive environments expose robots to the long tail of rare and extreme edge cases—scenarios that are unlikely to arise in static, task-specific training settings. The dynamism of sports inherently generates unpredictable conditions, forcing agents to improvise, recover from failure, and learn robust generalizable strategies. These long-tail experiences are invaluable for building adaptive, resilient AI systems—hallmarks of any path toward AGI.

Real-World Problem Solving Under Pressure

In sports-like scenarios, robots must solve complex problems in real time, integrating perception, control, and reasoning under intense time constraints. For instance, a humanoid robot dodging attacks or executing a precision move in a dynamic game must blend motion planning, balance control, and visual tracking within milliseconds. These challenges mirror the demands of AGI systems that must operate in uncertain, unstructured environments.

High-Quality Multimodal Data Generation

Each match or trial in a robotics competition produces high-density, multimodal datasets—combining vision, proprioception, force feedback, auditory cues, and often, human-robot interaction. These datasets provide a fertile foundation for training large-scale foundation models capable of reasoning across multiple sensory modalities, a key requirement for general intelligence.

Rapid Iteration and Strategic Learning

Just as virtual games like StarCraft II and Dota 2 catalyzed progress in deep reinforcement learning, physical robotics competitions enable fast iteration cycles. They offer structured, measurable environments for agents to test strategies, receive feedback, and continuously refine performance. These loops of rapid trial, failure, and adaptation are fundamental to developing strategic learning systems with AGI potential.

Public Engagement, Incentive Alignment and Talent Funnel

Just as conventional sports command global attention, robotic sports have the potential to transform public perception of robotics and AI, turning these advanced technologies into accessible, aspirational domains. High-stakes competitions, charismatic robots, and dynamic gameplay can capture imaginations and build a global audience—drawing in enthusiasts, media, and future innovators alike.

This public visibility is more than just brand awareness. It serves as a strategic incentive alignment mechanism, creating a self-sustaining ecosystem where entertainment, innovation, and education reinforce one another. Robotics companies benefit from organic marketing; research communities gain broader support; and society at large begins to view robotics not as an abstract, distant field—but as an exciting, culturally relevant frontier.

The Formula 1 Analogy: Competition as a Catalyst

A comparison can be drawn with Formula 1 racing, where extreme competition fuels relentless engineering advancement. Technologies such as advanced aerodynamics, hybrid drivetrains, and telemetry systems—once exclusive to F1—have gradually migrated into mainstream automotive products. The competitive pursuit of marginal gains in elite motorsport directly accelerates innovation for everyday road vehicles.

In the same way, robotic sports can become a proving ground for cutting-edge robotic systems, pushing the boundaries of perception, control, and embodied intelligence. Solutions developed under high-performance, real-time constraints in competitive settings often evolve into foundational technologies for broader deployment in logistics, healthcare, manufacturing, and consumer robotics.

A Magnet for the Next Generation

Beyond technical innovation, robotic sports serve a critical educational and societal function: they inspire the next generation. Young people exposed to high-profile competitions are more likely to pursue careers in AI, robotics, and engineering. Like traditional sports heroes, charismatic robot competitors and their creators can become role models—fueling a steady talent pipeline into one of the most strategically important industries of the future.

By aligning incentives across public interest, technical progress, and workforce development, robotic esports create a virtuous cycle—accelerating not only the path to advanced robotics and AGI but also the ecosystem that supports it.

NRN Robotics Roadmap

NRN’s approach to robotic sports is grounded in progressive, real-world experimentation. Our roadmap is structured into phases, each designed to validate and evolve the capabilities of our Sim-to-Real reinforcement learning (RL) pipeline. From robotic arms to humanoids and racing drones, we aim to demonstrate the full range of embodied AI in competitive settings.

Phase 1: Concept Validation & Pipeline Robustness

In this initial phase, we focus on showcasing the integrity of our RL-powered continuous learning pipeline using a robotic arm named RME-1 (pronounced “Arm-y 1”). This stage is centered around proving core capabilities like data collection and real-time learning. Demonstrations will include:

  • Object pickup and manipulation tasks

  • Stacking and fine motor control

  • Mini-putt challenges to showcase an understanding of environmental physics

  • Dynamic sparring drills to highlight reaction time and motion prediction

These tests serve as the foundation for more complex embodied AI behavior and validate our client-side data collection tools and training infrastructure in real-world conditions.

Phase 2: Diversifying Sport Primitives

Robotic Combat

Building on the success of RME-1, we will apply the full NRN RL pipeline to humanoid agents.

  • Begin with miniature humanoid robots to test RL agent performance in bipedal combat

  • Launch a full robotic combat competition campaign featuring humanoid tournaments

  • Scale toward larger, more complex humanoids with expanded mobility and dexterity

  • Long-term milestone: Develop and deploy full-sized humanoid competitors trained via continual learning, capable of dynamic physical interaction in competitive matches

Robotic Racing & Athletics

In addition to robotic combat, we will expand the NRN platform to other categories of robotic competition, each emphasizing different skill domains and control systems:

  • Robot Dog Racing & Challenges: Quadrupedal agents showcasing agility and terrain adaptation

  • Robot Kart Racing: Fast-paced, agent-controlled vehicles navigating real tracks

  • Drone Racing: Aerial agents trained on trajectory prediction and reactive flight control

  • Robot Athletics: Obstacle courses, climbing, jumps—pushing physical versatility and sim-to-real transfer

NRN B2B

NRN Agents Value to Studios

Enhance Player Experience

NRN Agents can be used in many ways to add value throughout a studio's lifecycle.

Indie Studios

Developers can swiftly prototype and scale human-like AI agents for multiplayer and PvP games. Studios can also tap into NRN's Trainer Platform to efficiently crowdsource agents from skilled players. This significantly boosts matchmaking liquidity, enhancing player experience and retention.

Established Studios

By fully integrating an imitation learning loop into games, where players can train their own agents to mimic their playstyle, NRN Agents empower studios to create novel gameplay experiences. These agents are high-fidelity replicas of their human players' skills. This capability applies to both single-player and multiplayer experiences, and can be released as standalone games or AI-enhanced game modes.

Scale Infrastructure Efficiently

NRN Agents help studios scale infrastructure efficiently. The NRN platform leverages a proprietary machine learning model, which minimizes data requirements, computational demands, and server expenditures while accelerating the training process.

Traditional platforms use ML models that require ever-growing datasets, inflating storage and compute costs. NRN models retain previously learned knowledge even as new data is introduced. Data used to train the model can be discarded after each training iteration, curbing ever-expanding data needs and minimizing computational demands.

Improve Monetization Potential

Game monetization often faces limitations due to player availability. By addressing player liquidity challenges, studios can boost the number of matches and in-game interactions. Integrating human-like AI agents offers a solution, as these agents are available 24/7, 365 days a year. These AI agents can simultaneously participate in multiple matches, game instances, or tournaments—a concept known as Parallel Play. This capability significantly expands the potential for player engagement and monetization.

Permanent Player Liquidity as a Service

NRN Agents solve the problem of player liquidity

Player liquidity refers to the availability of active players in a game. High liquidity ensures quick matchmaking and diverse opponents, while low liquidity leads to long wait times and repetitive matches, diminishing the player experience.

Why is player liquidity a problem?

Matchmaking Times | Quality: Good player liquidity ensures that players are matched with opponents or teammates of similar skill levels quickly, which is crucial for maintaining a balanced and enjoyable gaming experience.

Player Retention: Low liquidity can lead to frustration due to long wait times or poor match quality, causing players to leave the game, further exacerbating the problem. This can create a negative feedback loop where a declining player base leads to even lower liquidity.

Game Lifespan: High player liquidity contributes to the longevity of a game. A vibrant, active player base helps sustain the community, attract new players, and keep the game alive over time.

Player liquidity is particularly problematic for indie games, leading to early attrition, a vicious cycle of declining engagement, short lifespans, and difficulty gaining traction in an ever more competitive market.

How do NRN Agents solve this problem?

Upon integrating the NRN Agents SDK, developers can swiftly prototype and scale human-like AI agents for their games. Studios can also tap into NRN's Trainer Platform to efficiently crowdsource agents from skilled players. These agents simulate an active player base, allowing players to find matches even when human players are scarce. Studios utilize NRN Agents to populate their games with human-like AI bots, significantly enhancing player liquidity.

Why is NRN better than the traditional approach?

Resource limitation - NRN Agents significantly reduce the costs and complexity of bot development. Instead of coding bots from scratch, developers can create and train them by playing the game. Moreover, studios can directly leverage NRN's Trainer Platform, effectively outsourcing bot training to players. For indie studios and smaller developer teams with limited resources, this offers an affordable and scalable way to create a player experience historically limited to AAA studios with massive human and financial resources.

Bot effectiveness - Traditional AI bots are predictable and boring, and often fall short of providing engaging gameplay. NRN's player-trained AI models offer more human-like behavior, addressing player liquidity issues more effectively. These models can be integrated into a game's skill-matching system, ensuring balanced gameplay across different skill levels.

White Label AI Partner

Beyond player liquidity, studios can integrate NRN Agents to create new AI game experiences.

By fully integrating an imitation learning loop into games, where players can train their own agents to mimic their playstyle, NRN empowers studios to create novel gameplay experiences. These agents are high-fidelity replicas of their human players' skills. This capability applies to both single-player and multiplayer experiences, and can be released as standalone games or AI-enhanced game modes.

With human players' skills encapsulated in AI agents, players can extend their presence across multiple environments within the same game or even different game modes simultaneously. This dramatically increases the monetization potential for game studios.

Case Studies

AI Arena

NRN Agents enable players in AI Arena to train AI characters and compete in a PvP fighting game. In this setting, the AI adopts unique strategies and characteristics from its human trainer through a process known as imitation learning. AI Arena merges the traditional aspects of a platform-fighter with the dynamism of AI, providing players with a unique and unparalleled gaming experience.

For more information please visit AI Arena Documentation Site.

Other Case Studies Coming Soon

Types of projects we are currently working on

  • AAA studio - Top down shooter game

  • Established web2 studio - social casino game

NRN SDK Integration

Why do studios choose NRN?

Ease of Integration - By handling the complexities of machine learning, NRN Agents allow developers to focus on what they do best – creating incredible game experiences.

Customization at Your Fingertips - NRN Agents empower developers to offer their players a deep level of AI customization, enhancing the gaming experience.

Developer Support - Understanding that some aspects of AI integration can be complex, NRN Agents provide extensive support and resources to simplify the process.

Components of the NRN Agents SDK

NRN's SDK is divided into two main components: the Admin API and the Model API, both tailored to streamline the AI integration process in game development.

Admin API: Empowering Project Management

The Admin API serves as the command center for project administrators. With it, admins have the ability to define the scope and specifics of AI models used within their game. Admins can add or remove models as they please, ensuring that the game's AI evolves as needed. This flexibility is crucial for keeping the game's AI components relevant in a dynamically changing project development environment.

Model API: Simplifying AI Deployment for Developers

The Model API is the game developer's playground. It provides an intuitive way to implement and interact with the predefined AI models. Through a straightforward SDK, developers can access and instantiate these models within their games. The process is remarkably efficient – loading and deploying an AI model is as simple as writing a single line of code. Furthermore, NRN takes care of the heavy lifting, managing the infrastructure and server-side processing, which includes training the AI models on NRN's servers.

NRN is designed to empower game developers to incorporate advanced AI functionalities into their games with minimal coding effort. Performing AI inference or training is made effortless, encapsulated in concise, one-line commands. This simplicity accelerates development timelines and opens up new possibilities for AI in gaming.
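As a purely illustrative sketch of that one-line workflow: the real Model API surface is not documented on this page, so the `load_model` and `infer` names, the mock policy, and the model ID below are all hypothetical placeholders, not the actual SDK:

```python
# Mock illustration of the one-line load/infer workflow described above.
# load_model/infer and MockModel are hypothetical, NOT the real NRN Model API.
class MockModel:
    def __init__(self, model_id: str):
        self.model_id = model_id

    def infer(self, state: list) -> int:
        # Stand-in policy: choose the action index with the largest feature.
        return max(range(len(state)), key=lambda i: state[i])

def load_model(model_id: str) -> MockModel:
    """Hypothetical one-line entry point: fetch a trained model by ID."""
    return MockModel(model_id)

model = load_model("my-fighter-agent")   # one line to load a model
action = model.infer([0.1, 0.9, 0.3])    # one line to run inference
print(action)
```

In the described design, the equivalent real calls would hit NRN's servers, which handle training and inference infrastructure on the studio's behalf.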

How to Integrate

For a game to fully leverage the potential of NRN Agents, three key elements must be integrated:

Data Collection Algorithm

NRN Agents require developers to implement a method for collecting gameplay data. The data collection algorithm is critical for teaching your game's AI how players interact with the environment, enabling personalized and adaptive AI behaviors. If this sounds daunting, don't worry; it is actually quite straightforward with our guidance.
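At its simplest, data collection means recording a (state, action) pair on every game tick. The record shape below is an illustrative example, not an SDK-mandated schema:

```python
import json
from dataclasses import dataclass, asdict

# Illustrative record of one frame of gameplay; the fields are examples,
# not a schema required by the NRN SDK.
@dataclass
class Frame:
    state: list     # the game-state features visible to the agent
    action: int     # the action the human player took on this frame

def record_frame(buffer: list, state: list, action: int) -> None:
    """Append one (state, action) pair; called once per game tick."""
    buffer.append(Frame(state=state, action=action))

buffer: list = []
record_frame(buffer, state=[0.2, -1.0, 3.5], action=2)   # e.g. "jump"
record_frame(buffer, state=[0.3, -0.8, 3.5], action=0)   # e.g. "idle"

# At the end of a match, the buffer can be serialized and uploaded for training.
payload = json.dumps([asdict(f) for f in buffer])
print(len(buffer), len(payload))
```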

State Space Definition

Defining the "state space" or the set of all possible states in which the AI can operate is essential. This involves specifying the features and variables the AI models will use to make decisions. While this task lies with the developers, it's usually a natural step since it aligns closely with the game's design and mechanics. Understanding your game's core elements is key to defining an effective state space for AI training.

Action Conversion

The developer needs to define how to convert the AI's output into executable actions within their game.

For example, if the AI chooses to move left, some alternative implementations could be:

  • Trigger the button that is responsible for moving to the left

  • Bypass the button press and directly alter the velocity of the character
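Both alternatives above amount to a small mapping from the model's discrete output to an in-game effect. The action indices and the `Character` interface below are hypothetical examples, not part of the NRN SDK:

```python
# Illustrative conversion from a model's discrete output to in-game actions.
# The action indices (0=idle, 1=left, 2=right) and the Character interface
# are hypothetical examples, not part of the NRN SDK.
class Character:
    def __init__(self):
        self.vx = 0.0            # horizontal velocity
        self.pressed = []        # buttons pressed this frame

    def press_button(self, button: str) -> None:
        self.pressed.append(button)   # alternative 1: simulate a button press

    def set_velocity(self, vx: float) -> None:
        self.vx = vx                  # alternative 2: alter velocity directly

def apply_action(character: Character, action: int, use_buttons: bool = True) -> None:
    """Convert model output into an executable in-game action."""
    direction = {1: -1.0, 2: 1.0}.get(action)
    if direction is None:
        return  # 0 = idle: do nothing
    if use_buttons:
        character.press_button("LEFT" if direction < 0 else "RIGHT")
    else:
        character.set_velocity(direction * 5.0)

c = Character()
apply_action(c, 1, use_buttons=False)  # direct-velocity route
print(c.vx)  # -5.0
```

The button-press route keeps the agent subject to the same input pipeline as human players, while the direct-velocity route trades that fidelity for simplicity.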

Recognizing that these tasks may be new territory for some developers, the NRN team is ready to offer support. The SDK includes helper functions and guidance for setting up data collection and defining state spaces. NRN Agents will handle the heavy lifting of processing and machine learning, but these initial steps by the game developers ensure that the AI is tailored to the unique aspects of their game.

Network Effects

How does NRN create network effects?

NRN creates network effects by leveraging the interactions between players, AI agents, and game studios, leading to a self-reinforcing cycle of growth and value creation. This "flywheel" effect is a crucial aspect of NRN’s success, as it enables the platform to scale organically while increasing its overall utility and attractiveness to all participants in the ecosystem. Here’s how NRN’s network effects and flywheel operate:

Core Components of the NRN Ecosystem

  • Game Studios: Developers who integrate NRN into their games to benefit from immediate player liquidity, improved player engagement/retention, and monetization opportunities.

  • Trainers / Players: Gamers who contribute to games by supplying gameplay data, training AI agents, validating models, and participating in the NRN ecosystem.

  • AI agents: The AI agents trained by players are used in various games to simulate human behavior, enhance gameplay, and fill gaps in player liquidity.

  • NRN Tokens: The native token used within the NRN ecosystem to incentivize participation, reward contributions, and facilitate transactions.

The NRN Flywheel

The NRN flywheel is driven by the interactions and contributions of these core components, which together create a self-sustaining cycle of growth and value creation.

  • Game Studio Integration

    • Attracting Game Studios: Studios see the advantage of NRN’s AI agents for solving player liquidity, boosting player retention, and unlocking new monetization. As more studios adopt NRN Agents, integration becomes faster and more efficient. Games benefiting from better player liquidity and progression ladders motivate others to follow to stay competitive.

  • Bootstrapping Player Base

    • Attracting Player Participation: Players are attracted to NRN Agents by its distinctive AI training, the quality of its game titles, and the ability to monetize their skill. They begin by using the platform to train AI agents that replicate their playstyles or specialize in specific game strategies. Players earn tokens for their contributions, whether through AI training, validation, or trading models.

  • Expansion of the Player Base

    • Growing the Player Community: As more games adopt NRN Agents, the platform becomes more attractive to players, who are drawn by the opportunity to contribute their data, monetize their skills, access innovative AI-driven games, and participate in a thriving community.

  • Network Effects and Self-Sustaining Growth

    • Network Effects: The interactions between players, AI models, and game studios create powerful network effects. As the player base grows, the quality and quantity of AI models increase, making NRN Agents more valuable to game studios. In turn, as more studios integrate NRN Agents, more players are attracted to the platform, further driving its growth.

Tokenomics

$NRN Tokenomics v2.0

$NRN powers the NRN Ecosystem

$NRN is a utility token that powers the entire NRN ecosystem. It facilitates a diversified economy with multiple revenue-generating segments and staking opportunities:

Agent Deployment

  • More games integrated. More agents deployed. More project monetization.

    • NRN agents deployed in games are tracked through a certification system, with each deployment generating revenue for the project.

    • Over time, studios may pay NRN to integrate and access tooling, with revenue accruing to the $NRN community treasury and enabling buybacks.

Agentic Esports via NRN Reinforcement Learning

  • A platform where holders stake $NRN tokens at a 10:1 ratio to generate Data Capsules, which are used to train RL agents featured in competitive esports.

    • Data Capsules are containers where players contribute their gameplay data to train RL agents. These Data Capsules enable indexing of contributions, distribution of rewards, and tracking of user inputs and campaign outcomes.

    • When campaigns end, players can burn their Data Capsules to claim rewards and retrieve their staked $NRN.
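The staking arithmetic above can be sketched in a few lines. This is illustrative only: it assumes the 10:1 ratio means 10 staked $NRN per Data Capsule and that partial capsules are not minted — both are assumptions, not confirmed mechanics.

```python
# Illustrative only: the 10:1 stake-to-capsule ratio described above.
# Assumes 10 $NRN staked per Data Capsule; rounding behavior is a guess.
CAPSULE_RATIO = 10  # $NRN staked per Data Capsule (assumed direction)

def capsules_for_stake(staked_nrn: int) -> int:
    """Whole Data Capsules generated by a given $NRN stake."""
    return staked_nrn // CAPSULE_RATIO

print(capsules_for_stake(250))  # 250 staked $NRN -> 25 capsules
print(capsules_for_stake(95))   # remainder below the ratio mints nothing
```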

AI Arena

  • A cult favorite for competitive players with a skill-based wagering system

    • $NRN powers the in-game economy and enables skill-based staking mechanics

Ecosystem Revenue Model

Details of $NRN Utility

NRN B2B Integrations

NRN already has clients paying to integrate the SDK. Clients generally pay in stablecoins or their project’s native tokens.

These revenues flow into the $NRN community treasury.

  • The majority of the revenue will be kept in the Treasury and managed strategically. These funds can be used to buy back $NRN on the open market, held in the treasury, or sold to build stablecoin reserves that fund ecosystem growth initiatives.

  • A portion of the revenue will be used as incentives for players on the Trainer Platform to submit trained agents for use in games that onboard into the SDK.

  • The specific split of integration revenue between the Treasury and the Trainer Platform will be announced for each integration.

NRN B2B Trainer Platform

Users can sell various models, including starter versions for those new to model training. Gameplay data and configuration presets can also be listed, bought, and sold on the marketplace. Revenue to the $NRN Treasury from the Trainer Platform includes:

  • Training compute fee for model updating

  • Marketplace fee

AI Arena

AI Arena will continue to be an anchor in the economy. Revenues from AI Arena include:

  • In-game item purchases

  • Seasonal cosmetic updates

  • On-chain attribute re-rolls

  • Entry fees for special tournaments

  • Others TBA

For more information on AI Arena’s in-game economy, please visit the AI Arena documentation site.

NRN RL

Primary Utilities of $NRN in NRN RL:

  • High volumes of $NRN are locked for creating Data Capsules

  • $NRN is also staked on untrained RL Agents

For more details, visit the page on how NRN RL agents are trained.

Revenue Model and Prize Pool Capitalization:

  • Third-Party Token Pools: Future NRN RL campaigns will utilize prize pools capitalized by third-party tokens (e.g., TGEs for partner games), as well as data subsidies provided by data protocols.

$NRN Token Allocation & Vesting

Token TGE: June 2024

| Category | % of Total Supply | Unlocked at TGE | Vesting Schedule |
| --- | --- | --- | --- |
| Contributors | 35.8% | 0% | 12 m lock, then 1/24 per month linear (36 months total) |
| Investors | 14.2% | 0% | 12 m lock, then 1/24 per month linear (36 months total) |
| Foundation Treasury | 10.9% | 1.9% | Project controlled |
| Foundation OTC Sale | 1.1% | 0% | 6 m lock, then 1/12 per month linear (18 months total) |
| Community TGE Airdrop | 8.0% | 8.0% | Fully unlocked at TGE |
| Community & Ecosystem Rewards | 30.0% | 30.0% | Project controlled |
| Total | 100% | 40% | |