PLAY THE GAME

BASICS

Overview

The Innovation Game (TIG) is the first and only protocol designed specifically to accelerate algorithmic innovation. At the core of TIG lies a novel variant of proof-of-work called optimisable proof-of-work (OPoW).

OPoW can uniquely integrate multiple proof-of-works, “binding” them together in a manner that prevents centralisation arising from optimisations to the proof-of-work algorithms (see Section 2.1 of the TIG white paper for details). This resolves a longstanding issue that has hindered proof-of-work from being based on real-world computational scientific challenges.

TIG combines a crypto-economic framework with OPoW to:

  1. Incentivise miners, referred to as Benchmarkers, to adopt the most efficient algorithms (for performing proof-of-work) that are contributed openly to TIG. This incentive is derived from sharing block rewards proportional to the number of solutions found.
  2. Incentivise contributors, known as Innovators, to optimise existing proof-of-work algorithms and invent new ones. The incentive is provided by the prospect of earning a share of the block rewards based on adoption of their algorithms by Benchmarkers.

TIG will progressively phase in proof-of-works over time, directing innovative efforts towards the most significant challenges in science.

Blocks

In TIG, a block serves as the fundamental unit of time, roughly equivalent to 60 seconds. Blocks fulfil two primary functions:

  1. Timestamping when Benchmarkers start and submit their solutions.
  2. Executing OPoW-related state transitions that determine Benchmarkers’ influence and algorithm adoption, leading to the distribution of block rewards denominated in TIG tokens.

Rounds

A round spans 10,080 blocks, approximately equivalent to 604,800 seconds or 7 days. Rounds serve three primary purposes:

  1. Execution of algorithm-related state transitions.
  2. Coordination of protocol and configuration updates, including the introduction of new challenges and voting procedures.
  3. Structuring of the token emission schedule.

Token Emission Schedule

TIG’s token emission schedule comprises 5 tranches, each with the same total emission of 26,208,000 TIG, but successively doubling in duration (measured in rounds):

| Tranche | # Rounds | Token emission per block | Token emission per round | Start date | End date |
|---|---|---|---|---|---|
| 1 | 26 | 100 | 1,008,000 | 24 Nov 2023 | 1 June 2024 |
| 2 | 52 | 50 | 504,000 | 1 June 2024 | 31 May 2025* |
| 3 | 104 | 25 | 252,000 | 31 May 2025* | 29 May 2027* |
| 4 | 208 | 12.5 | 126,000 | 30 May 2027* | 24 May 2031* |
| 5 | 416 | 6.25 | 63,000 | 25 May 2031* | 14 May 2039* |

*Approximate dates

Post tranche 5, rewards are solely based on tokens generated from TIG Commercial license fees.

TIG Token

The TIG Token is currently deployed as an ERC20 smart contract on Base at 0x0C03Ce270B4826Ec62e7DD007f0B716068639F7B.

CHALLENGES

A challenge within the context of TIG is a computational problem adapted as one of the proof-of-works in OPoW. Presently, TIG features three challenges: boolean satisfiability, vehicle routing, and the knapsack problem. Over the coming year, an additional seven challenges from domains including artificial intelligence, biology, medicine, and climate science will be phased in.

The Innovation Game focuses on a category of problems that we call “asymmetric” problems. These are problems that require significant computational effort to solve, but once a solution is proposed, verifying its correctness is relatively straightforward.

Some areas that may yield asymmetric challenges suitable for The Innovation Game include:

  • Mathematical Problems. There are a great many examples of asymmetric problems in mathematics, from generating a mathematical proof to computing solutions to an equation. Zero knowledge proof (ZKP) generation, prime factorisation, and the discrete logarithm problem are further examples of asymmetric problems with significant implications in cryptography and number theory. Solutions to NP-complete problems are simple to check, yet the problems themselves are generally believed to be unsolvable in polynomial time. Such problems are fundamental in science and engineering; examples include the Hamiltonian Cycle Problem and the Boolean Satisfiability Problem (SAT).
  • Optimisation Problems. Optimisation problems are at the core of numerous scientific, engineering, and economic applications. They involve finding the best solution from all feasible solutions, often under a set of constraints. Notable examples include the Travelling Salesman Problem and the Graph Colouring Problem. Optimisation problems are also central to the training of machine learning models and the design of machine learning architectures such as the Transformer neural network architecture; relevant techniques include gradient descent, backpropagation, and convex optimisation.
  • Simulations. Simulations are powerful tools for modelling and understanding complex systems, from weather patterns to financial markets. While simulations themselves may not always be asymmetric problems, simulations may involve solving problems that are asymmetric, and these problems may be suitable for The Innovation Game. For example, simulations often require the repeated numerical solving of equations, where this numerical solving is an asymmetric problem.
  • Inverse Problems. Inverse problems involve deducing system parameters from observed data and are generically asymmetric. These problems are ubiquitous in fields like geophysics, medical imaging, and astronomy. For example, in medical imaging, reconstructing an image from a series of projections is an inverse problem, as seen in computed tomography (CT) scans.
  • General Computations. Any calculation can be made efficiently verifiable using a technique called “verifiable computation”. In verifiable computation, the agent performing the computation also generates a proof (such as a zero knowledge proof) that the computation was performed correctly. A verifier can then check the proof to ensure the correctness of the computation without needing to repeat the computation itself.

Beyond the initial set of challenges, TIG’s roadmap includes the establishment of a scientific committee tasked with sourcing diverse computational problems.

This chapter covers the following topics:

  1. The relationship between Benchmarkers and Challenges
  2. How real-world computational problems are adapted for proof-of-work
  3. How the network’s computational load for verifying solutions is regulated

Challenge Instances & Solutions

A challenge constitutes a computational problem from which instances can be deterministically pseudo-randomly generated given a seed and difficulty. Benchmarkers iterate over nonces to generate seeds, produce the corresponding challenge instances, and compute solutions using an algorithm.

Each challenge also stipulates the method for verifying whether a “solution” indeed solves a challenge instance. Since challenge instances are deterministic, anyone can verify the validity of solutions submitted by Benchmarkers.

Notes:

  • The minimum difficulty of each challenge ensures a minimum of \(10^{15}\) unique instances. This number increases further as difficulty increases.
  • Some instances may lack a solution, while others may possess multiple solutions.
  • Algorithms are not guaranteed to find a solution.

Adapting Real World Computational Problems

Computational problems with scientific or technological applications typically feature multiple difficulty parameters. These parameters may control factors such as the accuracy threshold for a solution and the size of the challenge instance.

For example, TIG’s version of the Capacitated Vehicle Routing Problem (CVRP) incorporates two difficulty parameters: the number of customers (nodes) and the factor by which a solution’s total distance must improve on a baseline value.
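As an illustration only (the names below are hypothetical, not TIG's actual definitions), such a difficulty can be represented as a small struct of integer parameters:

```rust
/// Illustrative sketch of a CVRP difficulty with two parameters.
/// Field names are hypothetical and chosen for readability.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct CvrpDifficulty {
    /// Number of customers (nodes) in the generated instance.
    num_nodes: u32,
    /// Factor (an integer, for reproducibility) by which a solution's total
    /// distance must improve on the baseline value.
    better_than_baseline: u32,
}

fn main() {
    let difficulty = CvrpDifficulty { num_nodes: 40, better_than_baseline: 250 };
    println!("{difficulty:?}");
}
```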

TIG’s inclusion of multiple difficulty parameters in proof-of-work sets it apart from other proof-of-work cryptocurrencies, necessitating innovative mechanisms to address two key issues:

  1. How to value solutions of varying difficulty so that they can be compared
  2. How to adjust difficulty when it comprises multiple parameters

Notes:

  • Difficulty parameters are always integers for reproducibility, with fixed-point numbers used if decimals are necessary.
  • The expected computational cost to compute a solution rises monotonically with difficulty.

Pareto Frontiers & Qualifiers

The issue of valuing solutions of different difficulties can be deconstructed into three sub-issues:

  1. There is no explicit value function that can “fairly” flatten difficulties onto a single dimension without introducing bias
  2. Setting a single difficulty will avoid this issue, but will excessively limit the scope of innovation for algorithms and hardware
  3. Assigning the same value to solutions no matter their difficulty would lead to Benchmarkers “spamming” solutions at the easiest difficulty

The key insight behind TIG’s Pareto frontiers mechanism (described below) is that the value function does not have to be explicit, but rather can be fluidly discoverable by Benchmarkers in a decentralised setting by allowing them to strike a balance between the difficulty they select and the number of solutions they can compute.

This emergent value function is naturally discovered as Benchmarkers, each guided by their unique “value function”, consistently select difficulties they perceive as offering the highest value. This process allows them to exploit inefficiencies until they converge upon a set of difficulties where no further inefficiencies remain to be exploited; in other words, staying at the same difficulties becomes more efficient, while increasing or decreasing would be inefficient.

Changes such as Benchmarkers going online/offline, availability of more performant hardware/algorithms, etc will disrupt this equilibrium, leading to a new emergent value function being discovered.

The Pareto frontiers mechanism works as follows:

  1. Plot the difficulties of all active solutions or benchmarks.
  2. Identify the hardest difficulties based on the Pareto frontier and designate their solutions as qualifiers.
  3. Update the total number of qualifying solutions.
  4. If the total number of qualifiers is below a threshold*, remove that frontier and repeat the process on the remaining difficulties.

*The threshold number of qualifiers is currently set to 5,000.
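The following is a minimal sketch of this selection loop for a two-parameter difficulty, assuming that larger parameter values mean harder instances; the types and data layout are illustrative rather than TIG's actual implementation:

```rust
/// Illustrative sketch of qualifier selection for a two-parameter difficulty.
/// Assumes larger values mean harder; types are hypothetical.
type Difficulty = [i32; 2];

/// A difficulty is on the (hardest) Pareto frontier if no other difficulty
/// is at least as hard in both parameters and strictly harder in one.
fn pareto_frontier(points: &[Difficulty]) -> Vec<Difficulty> {
    points
        .iter()
        .copied()
        .filter(|p| {
            !points.iter().any(|q| {
                q[0] >= p[0] && q[1] >= p[1] && (q[0] > p[0] || q[1] > p[1])
            })
        })
        .collect()
}

/// Repeatedly peel off the hardest frontier, counting its solutions as
/// qualifiers, until the threshold (e.g. 5,000) is reached.
fn select_qualifiers(
    mut solutions_by_difficulty: Vec<(Difficulty, u32)>, // (difficulty, #solutions)
    threshold: u32,
) -> (Vec<Difficulty>, u32) {
    let mut qualifying_difficulties = Vec::new();
    let mut num_qualifiers = 0;
    while num_qualifiers < threshold && !solutions_by_difficulty.is_empty() {
        let difficulties: Vec<Difficulty> =
            solutions_by_difficulty.iter().map(|(d, _)| *d).collect();
        let frontier = pareto_frontier(&difficulties);
        for (d, count) in &solutions_by_difficulty {
            if frontier.contains(d) {
                num_qualifiers += count;
            }
        }
        solutions_by_difficulty.retain(|(d, _)| !frontier.contains(d));
        qualifying_difficulties.extend(frontier);
    }
    (qualifying_difficulties, num_qualifiers)
}

fn main() {
    // One frontier holding 4,500 solutions, a second holding 900, and easier
    // difficulties that never need to be considered.
    let data = vec![([50, 5], 4_500), ([40, 4], 900), ([30, 3], 2_000)];
    let (difficulties, total) = select_qualifiers(data, 5_000);
    println!("{total} qualifiers at difficulties {difficulties:?}");
}
```

With this example data, the first frontier yields 4,500 qualifiers (below the threshold), the second yields a further 900, and the loop stops at 5,400 qualifiers, matching the example in the notes below.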

Notes:

  • Qualifiers for each challenge are determined every block.
  • Only qualifiers are utilised to determine a Benchmarker’s influence and an Algorithm’s adoption, earning the respective Benchmarker and Innovator a share of the block rewards.
  • The total number of qualifiers may exceed the threshold. For example, if the first frontier has 4,500 solutions and the second frontier has 900 solutions, then all 5,400 qualifiers are rewarded.

Difficulty Adjustment

Every block, the qualifiers for a challenge dictate its difficulty range. Benchmarkers, when initiating a new benchmark, must reference a specific challenge and block in their benchmark settings before selecting a difficulty within the challenge’s difficulty range.

A challenge’s difficulty range is determined as follows:

  1. From the qualifiers, identify the lowest difficulties based on the Pareto frontier to establish the base frontier.
  2. Calculate a difficulty multiplier (capped to 2.0)
    1. difficulty multiplier = number of qualifiers / threshold number of qualifiers
    2. e.g. if there are 1500 qualifiers and the threshold is 1000, the multiplier is 1500/1000 = 1.5
  3. Multiply the base frontier by the difficulty multiplier to determine the upper or lower bound.
    1. If multiplier > 1, base frontier is the lower bound
    2. If multiplier < 1, base frontier is the upper bound
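A sketch of the multiplier calculation and the scaling of a base-frontier point, assuming a two-parameter difficulty (names are illustrative):

```rust
/// Illustrative sketch of the difficulty-range calculation.
/// Difficulty parameters are integers; the multiplier is capped at 2.0.
fn difficulty_multiplier(num_qualifiers: u32, threshold: u32) -> f64 {
    (num_qualifiers as f64 / threshold as f64).min(2.0)
}

/// Scale a point on the base frontier by the multiplier. If the multiplier is
/// above 1, the scaled point is an upper bound (the base frontier is the lower
/// bound); if below 1, the scaled point is a lower bound.
fn scale_difficulty(base: [i32; 2], multiplier: f64) -> [i32; 2] {
    [
        (base[0] as f64 * multiplier).round() as i32,
        (base[1] as f64 * multiplier).round() as i32,
    ]
}

fn main() {
    // e.g. 1,500 qualifiers against a threshold of 1,000 gives a multiplier of 1.5
    let m = difficulty_multiplier(1_500, 1_000);
    let bound = scale_difficulty([40, 250], m);
    println!("multiplier = {m}, scaled frontier point = {bound:?}");
}
```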

The following Benchmarker behaviour is expected:


  • When the number of qualifiers is higher than the threshold: Benchmarkers will naturally select increasingly high difficulties so that their solutions stay on the frontiers for as long as possible, as only qualifiers count towards influence and result in a share of the block rewards.
  • When the number of qualifiers is equal to the threshold: Benchmarkers will stay at the same difficulty.
  • When the number of qualifiers is lower than the threshold: Benchmarkers will naturally select lower difficulties to compute more solutions, which will be qualifiers.

Regulating Verification Load

Verification of solutions constitutes almost the entirety of the computational load on TIG’s network in the early phase of deployment. In addition to probabilistic verification, which drastically reduces the number of solutions requiring verification, TIG employs a solution signature threshold mechanism to regulate the rate of solutions and the verification load of each solution.

Solution Signature

A solution signature is a unique identifier for each solution derived from hashing the solution and its runtime signature. To be considered valid, this signature must fall below a dynamically adjusted threshold.

Each challenge possesses its own dynamically adjusted solution signature threshold which begins at 100% and can be adjusted by a maximum of 0.5% per block. The solution signature threshold adjusts the probability of a solution being submittable to TIG. Lowering the threshold has the effect of reducing this probability, thereby decreasing the overall rate of solutions being submitted. As a result the difficulty range of the challenge will also decrease. Increasing the threshold has the opposite effect.
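A minimal sketch of the check, assuming a generic hash and a threshold expressed as a fraction of the u32 range (both are assumptions for illustration, not TIG's actual scheme):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative sketch: derive a solution signature by hashing the serialised
/// solution together with its runtime signature, then accept the solution only
/// if the signature falls below the challenge's current threshold. The hash
/// function and u32 threshold representation are assumptions, not TIG's actual ones.
fn solution_signature(serialised_solution: &[u8], runtime_signature: u64) -> u32 {
    let mut hasher = DefaultHasher::new();
    serialised_solution.hash(&mut hasher);
    runtime_signature.hash(&mut hasher);
    (hasher.finish() & 0xFFFF_FFFF) as u32
}

/// threshold_percent = 100.0 accepts every solution; lowering it lowers the
/// probability that any given solution is submittable.
fn is_submittable(signature: u32, threshold_percent: f64) -> bool {
    let threshold = (u32::MAX as f64 * threshold_percent / 100.0) as u32;
    signature <= threshold
}

fn main() {
    let sig = solution_signature(b"example serialised solution", 0xDEAD_BEEF);
    println!("submittable at 25%? {}", is_submittable(sig, 25.0));
}
```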

There are 2 feedback loops which adjust the threshold:

Target fuel consumption (currently disabled). The execution of an algorithm is performed through a WASM Virtual Machine which tracks “fuel consumption”, a proxy for the real runtime of the algorithm. Fuel consumption is deterministic and is submitted by Benchmarkers when submitting solutions.

Another motivation for targeting a specific fuel consumption is to maintain a fair and decentralised system. If the runtime approaches the lifespan of a solution, raw speed (as opposed to efficiency) would become the dominant factor, potentially giving a significant advantage to certain types of specialised hardware architectures (such as those found in “supercomputers”) that prioritise speed over efficiency (which is undesirable).

Target solutions rate. The solutions rate is determined every block from the “mempool” proofs being confirmed. (Each proof is associated with a benchmark, which contains a number of solutions.)

Spikes in solutions rate can occur when there is a sudden surge of new Benchmarkers/compute power coming online. If left unregulated, the difficulty should eventually rise such that the solution rate settles to an equilibrium rate, but this may take a prolonged period causing a strain on the network from the large verification load. To smooth out the verification load, TIG targets a specific solutions rate.

INNOVATORS

Innovators are players in TIG who optimise existing proof-of-work algorithms and/or invent new ones, contributing them to TIG in the hope of earning token rewards.

This chapter covers the following topics:

  1. The two types of algorithm submissions
  2. Mechanisms for maintaining a decentralised repository
  3. How algorithms are executed by Benchmarkers
  4. How algorithms earn token rewards

Types of Algorithm Submissions

There are two types of algorithm submissions in TIG:

  1. Code submissions
  2. Breakthrough submissions

Code submissions encompass porting an existing algorithm for use in TIG, optimising the performance of an algorithm previously submitted by another Innovator, or implementing an entirely new algorithm. Code submissions must implement a solve_challenge function.

Presently, code submissions are restricted to Rust, automatically compiled into WebAssembly (WASM) for execution by Benchmarkers. Rust was chosen for its performance advantages over other languages, enhancing commercial viability of algorithms contributed to TIG, particularly in high-performance computing. Future iterations of TIG will support additional languages compilable to WASM.
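For illustration only, a code submission has roughly the following shape; the Challenge and Solution types and the exact signature below are hypothetical stand-ins rather than TIG's actual API:

```rust
/// Hypothetical stand-ins for a challenge's instance and solution types.
pub struct Challenge {
    pub seed: u64,
    pub difficulty: Vec<i32>,
}

pub struct Solution {
    pub items: Vec<usize>,
}

/// Every code submission implements a solve_challenge entry point that either
/// returns a solution for the given instance or None if the algorithm fails
/// to find one (algorithms are not guaranteed to find a solution).
pub fn solve_challenge(challenge: &Challenge) -> Option<Solution> {
    // A trivial placeholder "algorithm": real submissions implement an actual
    // solver (e.g. a SAT, vehicle-routing, or knapsack heuristic) here.
    let _ = (challenge.seed, &challenge.difficulty);
    None
}

fn main() {
    let challenge = Challenge { seed: 42, difficulty: vec![50, 300] };
    match solve_challenge(&challenge) {
        Some(solution) => println!("found a solution with {} items", solution.items.len()),
        None => println!("no solution found for this instance"),
    }
}
```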

Breakthrough submissions involve the introduction of novel algorithms tailored to solve TIG’s proof-of-work challenges. A breakthrough submission will often yield such a significant performance enhancement that even unoptimised code of the new algorithm outpaces the most optimised code of an existing one.

Note: Support for breakthrough submissions is not currently in place but will be available in the coming months (pending a sufficiently wide token distribution).

Decentralised Repository

Algorithms are contributed to a repository without a centralised gatekeeper. TIG addresses crucial issues such as spam and piracy to ensure fair rewards for Innovators based on performance, maintaining a strong incentive for innovation.

To combat spam, Innovators must pay a submission fee of 0.001 ETH, burnt by sending it to the null address (0x0000000000000000000000000000000000000000). In the future, this fee will be denominated in TIG tokens.

To address the possibility of piracy and to provide an opportunity for IP protection, TIG implements a “push delay” and “merge points” mechanism:

Push Delay Mechanism

Upon submission, algorithms are committed to their own branch and pushed to a private repository. Following successful compilation into WebAssembly (WASM), a delay of 2 rounds ensues before the algorithm is made public, at which point the branch is pushed to TIG’s public repository. This delay safeguards Innovators’ contributions, allowing them time to benefit before others can optimise upon or pirate their work.

Notes:

  • Confirmation of an algorithm’s submission occurs in the next block, determining the submission round.
  • An algorithm submitted in round X is made public at the onset of round X + 2.

Merge Points Mechanism

This mechanism aims to deter algorithm piracy. For every block in which an algorithm achieves at least 25% adoption, it earns a merge point alongside a share of the block reward based on its adoption.

At the end of each round, the algorithm from each challenge with the most merge points (exceeding a minimum threshold of 5,040) is merged into the repository’s main branch. Merge points reset each round.

Merged algorithms, as long as their adoption is above 0%, share in block rewards every block.

The barrier for an Innovator’s contribution to be merged is intentionally set relatively high to minimise the likely payoff from pirating algorithms.

For algorithmic breakthrough submissions, the vote for recognising the algorithm as a breakthrough starts only when its code gets merged (details to come). This barrier is based on TIG’s expectation that breakthroughs will demonstrate distinct performance improvements, ensuring high adoption even in unoptimised code.

Deterministic Execution

Algorithms in TIG are compiled into WebAssembly (WASM), facilitating execution by a corresponding WASM Virtual Machine. This environment, based on wasmi (developed by Parity Technologies for blockchain applications), enables tracking of fuel consumption, imposes memory limits, and provides tools for deterministic compilation.

Benchmarkers must download the WASM blob for their selected algorithm from TIG’s repository before executing it using TIG’s WASM Virtual Machine.

Notes:

  • The WASM Virtual Machine functions as a sandbox environment, safeguarding against excessive runtime, memory usage, and malicious actions.
  • Advanced Benchmarkers may opt to compile algorithms into binary executables for more efficient nonce searches, following thorough vetting of the code.

Runtime Signature

As an algorithm is executed by TIG’s WASM Virtual Machine, a “runtime signature” is updated every opcode using the stack variables. This runtime signature is unique to the algorithm and challenge instance, and is used to verify that Benchmarkers actually used the algorithm specified in their settings.
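A sketch of the idea, where the mixing rule and the values folded in are assumptions for illustration:

```rust
/// Illustrative sketch of a runtime signature: a rolling hash that is updated
/// once per executed opcode using values taken from the VM's stack. The mixing
/// constants and update rule here are assumptions, not TIG's actual ones.
struct RuntimeSignature(u64);

impl RuntimeSignature {
    fn new() -> Self {
        RuntimeSignature(0)
    }

    /// Fold the opcode and the current top-of-stack value into the signature.
    fn update(&mut self, opcode: u8, top_of_stack: u64) {
        self.0 = self
            .0
            .rotate_left(5)
            .wrapping_mul(0x0100_0000_01B3) // FNV-like multiplier
            ^ (opcode as u64)
            ^ top_of_stack;
    }
}

fn main() {
    let mut sig = RuntimeSignature::new();
    // Simulate a short trace of (opcode, top-of-stack) pairs.
    for (opcode, value) in [(0x20u8, 7u64), (0x6A, 12), (0x41, 3)] {
        sig.update(opcode, value);
    }
    println!("runtime signature: {:016x}", sig.0);
}
```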

Sharing in Block Rewards

TIG incentivises algorithm contributions through block rewards: 15% of block rewards are allocated evenly across challenges that have at least one “pushed” algorithm, then distributed pro-rata within each challenge based on adoption rates.
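For example, if three challenges each have at least one pushed algorithm, each challenge’s pool is 5% of the block reward; an algorithm with 40% adoption within its challenge would then earn 40% of that 5%, i.e. 2% of the block reward.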

In the future, a fixed percentage (we intend 15% of block rewards, see below) will be assigned to the latest algorithmic breakthrough for each challenge. In the absence of a breakthrough, this percentage reverts back to the Benchmarkers’ pool. Given the expected relative rarity of algorithmic breakthroughs (compared to code optimisations), this represents a significant reward, reflecting TIG’s emphasis on breakthrough innovations.

When the rewards stream for algorithmic breakthroughs is introduced, there will be a total of 30% of block rewards for Innovators and 70% for Benchmarkers. Over time, we intend for the percentage of block rewards for Innovators to approach 50%.

BENCHMARKERS

Benchmarkers are players in TIG who continuously select algorithms to compute solutions for challenges and submit them to TIG through benchmarks and proofs to earn block rewards.

This chapter covers the following topics:

  1. How solutions are computed
  2. How solutions are submitted
  3. How solutions are verified
  4. How solutions earn block rewards

Computing Solutions

The process of benchmarking comprises 3 steps:

  1. Selecting benchmark settings
  2. Generating challenge instances
  3. Executing the algorithm on instances and recording solutions

Apart from algorithm selection, this process is entirely automated by the browser benchmarker.

Benchmark Settings

A Benchmarker must select their settings, comprising 5 fields, before benchmarking can begin:

  1. Player Id
  2. Challenge Id
  3. Algorithm Id
  4. Block Id
  5. Difficulty

Player Id is the address of the Benchmarker. This prevents fraudulent re-use of solutions computed by another Benchmarker.

Challenge Id identifies the proof-of-work challenge for which the Benchmarker is attempting to compute solutions. The challenge must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on minimising their imbalance. Note: Imbalance minimisation is the default strategy for the browser benchmarker.

Algorithm Id is the proof-of-work algorithm that the Benchmarker wants to use to compute solutions. The algorithm must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on the algorithm’s performance in computing solutions.

Block Id is a reference block from which the lifespan of the solutions begins counting down. Benchmarkers are incentivised to reference the latest block so as to maximise the remaining lifespan of any computed solutions.

Difficulty is the difficulty of the challenge instances for which the Benchmarker is attempting to compute solutions. The difficulty must lie within the valid range of the challenge for the referenced block. Benchmarkers are incentivised to make their selection to strike a balance between the number of blocks for which their solutions will remain qualifiers and the number of solutions they can compute (e.g. a lower difficulty may mean more solutions, but may reduce the number of blocks for which those solutions remain qualifiers).
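Collected together, the settings might look like the following struct; the field types and example values are illustrative, not TIG's actual schema:

```rust
/// Illustrative sketch of the five benchmark settings fields.
/// Types, names, and example values are chosen for readability.
struct BenchmarkSettings {
    /// Address of the Benchmarker submitting the benchmark.
    player_id: String,
    /// Proof-of-work challenge being benchmarked (must be active).
    challenge_id: String,
    /// Algorithm used to compute solutions (must be active).
    algorithm_id: String,
    /// Reference block from which the solutions' lifespan counts down.
    block_id: String,
    /// Difficulty, which must lie within the challenge's current range.
    difficulty: Vec<i32>,
}

fn main() {
    let settings = BenchmarkSettings {
        player_id: "0xBenchmarkerAddress".into(),
        challenge_id: "c001".into(),
        algorithm_id: "example_algorithm".into(),
        block_id: "latest_block_id".into(),
        difficulty: vec![50, 300],
    };
    println!("benchmarking {} with {}", settings.challenge_id, settings.algorithm_id);
}
```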

Unpredictable Challenge Instances

TIG makes it intractable for Benchmarkers to re-use solutions in two ways:

  1. Challenge instances are deterministically pseudo-randomly generated, with at least \(10^{15}\) unique instances even at minimum difficulty.
  2. Instance seeds are computed by hashing the benchmark settings and XOR-ing with a nonce, ensuring unpredictability.

During benchmarking, Benchmarkers iterate over nonces for seed and instance generation.
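A minimal sketch of that derivation, with a generic hash standing in for whatever hashing TIG actually uses:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative sketch: derive a per-instance seed by hashing the benchmark
/// settings and XOR-ing with the nonce. DefaultHasher stands in for TIG's
/// actual hash; the point is that the seed is deterministic in
/// (settings, nonce), so anyone can regenerate the same instance.
fn instance_seed(settings: &str, nonce: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    settings.hash(&mut hasher);
    hasher.finish() ^ nonce
}

fn main() {
    let settings = "player|challenge|algorithm|block|difficulty";
    // Benchmarkers iterate over nonces, generating one instance per nonce.
    for nonce in 0..3u64 {
        println!("nonce {nonce} -> seed {:016x}", instance_seed(settings, nonce));
    }
}
```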

Algorithm Execution

Active algorithms reside as compiled WebAssembly (WASM) blobs in TIG’s open repository:

https://raw.githubusercontent.com/tig-foundation/tig-monorepo/<branch>/tig-algorithms/wasm/<branch>.wasm

where <branch> is <challenge_name>/<algorithm_name>

Benchmarkers download the relevant WASM blob for their chosen algorithm and execute it using TIG’s WASM Virtual Machine with the specified seed and difficulty inputs.

If a solution is found, the following data is output:

  1. Nonce
  2. Runtime signature
  3. Fuel consumed
  4. Serialised solution

From this data, Benchmarkers compute the solution signature and retain the solution only if it meets the challenge’s threshold.

Submitting Solutions

The process of submitting solutions comprises 4 steps:

  1. Submit the benchmark
  2. Await probabilistic verification
  3. Submit the proof
  4. Await submission delay

Submitting Benchmark

A benchmark, a lightweight batch of valid solutions found using identical settings, includes:

  • Benchmark settings
  • Metadata for solutions
  • Data for a single solution

Benchmark settings must be unique, i.e. the same settings can only be submitted once.

Metadata for a solution consists of its nonce and solution signature. Nonces must be unique and all solution signatures must be under the threshold for the referenced challenge & block.

Data for a solution consists of its nonce, runtime signature, fuel consumed and the serialised solution. The solution for which data must be submitted is randomly sampled. TIG requires this data as Sybil-defence against fraudulent benchmarks.

Probabilistic Verification

Upon submission, a benchmark enters the mempool for inclusion in the next block. When the benchmark is confirmed into a block, up to three unique nonces are sampled from the metadata, and the corresponding solution data must be submitted by the Benchmarker.

TIG refers to this sampling as probabilistic verification, and ensures its unpredictability by using both the new block id and benchmark id in seeding the pseudo-random number generator. Probabilistic verification not only drastically reduces the amount of solution data that gets submitted to TIG, but also renders it irrational to fraudulently “pad” a benchmark with fake solutions:

If a Benchmarker computes N solutions, and pads M fake solutions to the benchmark for a total of N + M solutions, then the chance of getting away with this is \(\left(\frac{N}{N+M}\right)^3\). The expected payoff for honesty (N solutions always accepted) is always greater than the payoff for fraudulence (N+M solutions sometimes accepted):

$$N > (N + M) \cdot \left(\frac{N}{N+M}\right)^3$$

$$1 > \left(\frac{N}{N+M}\right)^2$$
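For example, padding N = 100 genuine solutions with M = 100 fakes gives only a \(\left(\frac{100}{200}\right)^3 = \frac{1}{8}\) chance of surviving the sampling, for an expected 200 × 1/8 = 25 accepted solutions, versus a guaranteed 100 for submitting honestly.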

Submitting Proof

A proof includes the following fields:

  • Benchmark id
  • Array of solution data

Benchmark id refers to the benchmark for which a proof is being submitted. Only one proof can be submitted per benchmark.

Array of solution data must correspond to the nonces sampled from the benchmark’s solutions metadata.

Submission Delay & Lifespan mechanism

Upon confirmation of a proof submission, a submission delay is determined based on the block gap between when the benchmark started and when its proof was confirmed.

A submission delay penalty is calculated by multiplying the submission delay by a multiplier (currently set to 3). If the penalty is X and the proof was confirmed at block Y, then the benchmark’s solutions only become “active” (eligible to potentially be qualifiers and share in block rewards) from block X + Y onwards.

Since TIG imposes a lifespan (the maximum number of blocks for which a solution can be active, currently set to 120 blocks), there is a strong incentive for Benchmarkers to submit solutions as soon as possible.
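For example, if a benchmark references block 1,000 (taken as its start) and its proof is confirmed at block 1,010, the submission delay is 10 blocks and the penalty is 3 × 10 = 30 blocks; the solutions become active at block 1,040 and, with the 120-block lifespan counted from block 1,000, remain active until block 1,120.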

Verification of Solutions

Two types of verification are performed on solutions submitted to TIG to safeguard algorithm adoption against manipulation:

  1. Verification of serialised solutions against challenge instances, triggered during benchmark and proof submission.
  2. Verification of the algorithm that the Benchmarker claims to have used, involving re-running the algorithm against the challenge instance before checking that the same solution data is reproduced.

If verification fails, the benchmark is flagged as fraudulent, disqualifying its solutions. In the future (when Benchmarker deposits are introduced) a slashing penalty will be applied.

Sharing in Block Rewards

Every block, 85% of block rewards are distributed pro-rata amongst Benchmarkers based on influence. A Benchmarker’s influence is based on their fraction of qualifying solutions across challenges, with only active solutions eligible.

Cutoff Mechanism

To strongly disincentivise Benchmarkers from focusing only on a single challenge (e.g. benchmarking their own algorithm), TIG employs a cutoff mechanism. This mechanism limits the number of a Benchmarker’s solutions that can qualify per challenge, based on their minimum number of solutions across challenges multiplied by a multiplier (currently set to 1.1).

The multiplier is such that the cutoff mechanism will not affect normal benchmarking in 99.9% of cases.
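A sketch of the calculation, assuming the cap applies to each Benchmarker's own solution counts (the data layout is illustrative):

```rust
/// Illustrative sketch of the cutoff mechanism: a Benchmarker's qualifiers in
/// any one challenge are capped at (their minimum solution count across
/// challenges) x 1.1.
fn cutoff(solutions_per_challenge: &[u32], multiplier: f64) -> u32 {
    let min = solutions_per_challenge.iter().copied().min().unwrap_or(0);
    (min as f64 * multiplier).floor() as u32
}

fn main() {
    // A Benchmarker spread across three challenges ...
    let balanced = [100, 95, 105];
    // ... versus one focusing almost entirely on a single challenge.
    let focused = [1_000, 5, 3];
    println!("balanced cutoff: {}", cutoff(&balanced, 1.1)); // 104
    println!("focused cutoff:  {}", cutoff(&focused, 1.1)); // 3
}
```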

OPTIMISABLE PROOF-OF-WORK

Optimisable proof-of-work (OPoW) can uniquely integrate multiple proof-of-works, “binding” them in such a way that optimisations to the proof-of-work algorithms do not cause instability or centralisation. This binding is embodied in the calculation of influence for Benchmarkers. The adoption of an algorithm is then calculated using each Benchmarker’s influence and the fraction of qualifiers they computed using that algorithm.

Rewards for Benchmarkers

OPoW introduces a novel metric, imbalance, aimed at quantifying the degree to which a Benchmarker spreads their computational work between challenges unevenly. This is only possible when there are multiple proof-of-works.

The metric is defined as:

$$imbalance = \frac{C_v(\%qualifiers)^2}{N-1}$$

where \(C_v\) is the coefficient of variation, \(\%qualifiers\) is the fraction of qualifiers found by a Benchmarker across challenges, and \(N\) is the number of active challenges. This metric ranges from 0 to 1, where lower values signify less centralisation.

Penalising imbalance is achieved through:

$$imbalance\_penalty = 1 - \exp(-k \cdot imbalance)$$

where k is a coefficient (currently set to 1.5). The imbalance penalty ranges from 0 to 1, where 0 signifies no penalty.

When block rewards are distributed pro-rata amongst Benchmarkers after applying their imbalance penalty, the result is that Benchmarkers are incentivised to minimise their imbalance as to maximise their reward:

$$benchmarker\_reward \propto mean(\%qualifiers) \cdot (1 - imbalance\_penalty)$$

where \(\%qualifiers\) is the fraction of qualifiers found by a Benchmarker for a particular challenge.

Notes:

  • A Benchmarker focusing solely on a single challenge will exhibit a maximum imbalance and therefore maximum penalty.
  • Conversely, a Benchmarker with an equal fraction of qualifiers across all challenges will demonstrate a minimum imbalance value of 0.
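A sketch of the imbalance and penalty calculation consistent with the formulas above (the use of the population coefficient of variation is an assumption of this sketch):

```rust
/// Illustrative sketch of the OPoW imbalance metric and penalty.
/// `pct_qualifiers[i]` is the Benchmarker's fraction of qualifiers in
/// challenge i; k is the penalty coefficient (currently 1.5).
fn imbalance(pct_qualifiers: &[f64]) -> f64 {
    let n = pct_qualifiers.len() as f64;
    let mean = pct_qualifiers.iter().sum::<f64>() / n;
    if mean == 0.0 {
        return 0.0;
    }
    let variance = pct_qualifiers.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    let cv = variance.sqrt() / mean; // coefficient of variation
    cv.powi(2) / (n - 1.0)
}

fn imbalance_penalty(imbalance: f64, k: f64) -> f64 {
    1.0 - (-k * imbalance).exp()
}

fn main() {
    // Evenly spread across 3 challenges: imbalance 0, no penalty.
    let even = [0.10, 0.10, 0.10];
    // Focused on a single challenge: maximum imbalance of 1.0.
    let focused = [0.30, 0.0, 0.0];
    for pct in [even, focused] {
        let i = imbalance(&pct);
        println!("imbalance = {:.3}, penalty = {:.3}", i, imbalance_penalty(i, 1.5));
    }
}
```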

Rewards for Innovators

In order to guard against potential manipulation of algorithm adoption by Benchmarkers, Innovator rewards are linked to Benchmarker rewards (where imbalance is heavily penalised):

$$innovator\_reward \propto \sum_{benchmarkers} benchmarker\_reward \cdot algorithm\_\%qualifiers$$

where \(algorithm\_\%qualifiers\) is the fraction of qualifiers found by a Benchmarker using a particular algorithm (the algorithm submitted by the Innovator).

ROADMAP

The technical roadmap for TIG in 2024 is as follows:

| Feature | Approximate date |
|---|---|
| New challenge (c004) | July |
| New challenge (c005) | Aug |
| Locked Deposits (necessary for voting) | Sep |
| Benchmarker Staking | Oct |
| Breakthrough Voting | Nov |
| New challenge (c006) | Dec |

TIG intends to migrate in 2025 to an L1 blockchain where OPoW is integrated with the consensus layer, leveraging Polkadot’s Substrate framework. (TIG is currently running in an off-chain execution and on-chain settlement configuration.)

Play TIG

and help solve Humanity’s biggest challenges