Technology
The heart of the project is 3T, the Tactical Trend Trader. This collection of integrated components is the technical manifestation of the rational, ordered, and powerful system that executes the mission: a sophisticated, fully automated algorithmic trading agent built on a foundation of artificial intelligence, quantitative analysis, and decentralized finance protocols.
Phases
3T operates in a dual-state cycle, a standard and robust architecture for deployed machine learning systems.
TRAIN: This is the offline learning and validation stage. 3T's core models are trained on historical trading performance: feature values and how they relate to realized profit and loss under changing market conditions. During this phase, the models learn to identify complex, nonlinear patterns that correlate with future market movements. The phase includes rigorous backtesting and cross-validation, along with methods to ensure the resulting models are not overfitted to historical data. The output of this phase is a validated, production-ready model.
INFERENCE: This is the live, autonomous operational stage. The trained and validated model is deployed into the market, where it connects to real-time data feeds and can graduate to risk-on positions in the market. The model ingests live market data, processes it through its learned parameters, and generates trading decisions. These decisions are then translated into orders and executed automatically via the exchange's API. This cycle of data ingestion, decision-making, and execution runs continuously.
TP (Take Profit): Once the portfolio managed by an individual INFERENCE node hits its objective, all strategies are exited and all positions are flattened. The portfolio is then rebuilt from scratch and the process repeats.
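A toy, runnable schematic of this cycle is sketched below; every function and value is a stand-in for the component it names, not 3T's actual interface.

```python
import random

def train_and_validate():
    """TRAIN: offline learning stub; returns a 'model' mapping features to a signal."""
    return lambda features: random.choice([-1.0, 0.0, 1.0])

def run_one_cycle(objective: float = 1.0, max_steps: int = 1_000) -> float:
    model = train_and_validate()
    pnl = 0.0
    for _ in range(max_steps):                     # INFERENCE: continuous live loop
        features = random.random()                 # stand-in for live market data
        decision = model(features)                 # learned parameters -> decision
        pnl += decision * random.gauss(0.0, 0.1)   # stand-in for executed P&L
        if pnl >= objective:                       # TP: portfolio objective hit
            break
    return pnl                                     # flatten, rebuild, repeat

if __name__ == "__main__":
    print(run_one_cycle())
```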
Bagged Ensembles
The predictive core of 3T is an ensemble of models, including Artificial Neural Networks (ANNs), configured for standard regression. The objective of the regression model is not to predict a precise future price, but to forecast a portfolio management approach likely to capitalize on a directional movement, which in turn informs discrete trading actions.
Machine Learning Approach
The fundamental building block of 3T is an ensemble that combines feedforward neural networks (multilayer perceptrons) with tree-based models such as LightGBM, CatBoost, and XGBoost, and potentially others such as Random Forest and k-Nearest Neighbors. For a given input vector of trading/risk-management features x, the ensemble produces a prediction y(x) of expected profit or loss potential through standard (non-quantile) regression, with the goal of estimating the conditional mean of expected strategy performance given the selected features.
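Formally, the output of a standard (non-quantile) regression can be read as an estimate of the conditional mean of strategy performance given the features:

$$\hat{y}(\mathbf{x}) \approx \mathbb{E}\left[\,\mathrm{PnL} \mid \mathbf{x}\,\right]$$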
Bootstrap Aggregating (Bagging)
A single ANN can be prone to high variance, meaning its performance can be sensitive to the specific training data it sees. This is a significant risk in financial markets, which are inherently noisy and non-stationary. To combat this, 3T employs an ensemble technique known as Bootstrap Aggregating, or Bagging.
The bagging process works as follows:
Bootstrapping: From the original training dataset of size N, multiple new datasets (bootstrap samples) of the same size N are created by sampling with replacement. This means some data points from the original set may appear multiple times in a given sample, while others may not appear at all.
Parallel Training: An independent ANN model is trained on each of these bootstrap samples. Because each model sees a slightly different subset of the data, the models learn slightly different parameters and internal representations of the optimal feature values for each traded market.
Aggregation: When a new prediction is required during the INFERENCE phase, the input data is fed to all the trained ANNs in the ensemble. For a regression task, the final prediction is the average of the outputs from all the individual models.
This process significantly reduces the overall variance of the prediction without substantially increasing bias. The averaging smoothes out the idiosyncrasies of individual models, leading to a more stable, robust, and reliable final output, which is critical for consistent performance in financial applications.
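As an illustration, a minimal bagging loop might look like the sketch below, where X and y are NumPy arrays and make_model is an assumed factory returning any untrained regressor with scikit-learn-style fit/predict methods:

```python
import numpy as np

def train_bagged_ensemble(make_model, X, y, n_models=10, seed=0):
    """Train `n_models` independent regressors, one per bootstrap sample."""
    rng = np.random.default_rng(seed)
    models, n = [], len(X)
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)   # bootstrap: N rows sampled with replacement
        model = make_model()
        model.fit(X[idx], y[idx])          # each model trains on its own resample
        models.append(model)
    return models

def predict_bagged(models, X_new):
    # Aggregation: the ensemble prediction is the average of all model outputs.
    return np.mean([m.predict(X_new) for m in models], axis=0)
```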
Training and Validation Protocol
The training of the bagged ensemble is a computationally intensive process governed by rigorous statistical protocols to ensure its validity and guard against overfitting.
Loss Function: 3T's ANNs are primarily trained using the Root Mean Squared Error (RMSE) as the loss function. The goal of the training algorithm is to adjust the network's weights (w) to minimize this value. RMSE is chosen because it heavily penalizes larger errors, pushing the model to be more accurate on significant deviations. Its main advantage over MSE is that it expresses the error in the same units as the target variable, making it more interpretable.
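For N training examples with targets $y_i$ and predictions $\hat{y}(x_i; w)$, the quantity minimized over the weights $w$ is:

$$\mathrm{RMSE}(w) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}(x_i; w)\right)^2}$$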
Standard k-fold cross-validation, which involves random shuffling of the data, is used to evaluate how features translate into profitability predictions without introducing lookahead bias, since the training datasets are not time-series based. The only "lookahead" value provided to the system is P&L, which is used for validation only.
This rigorous validation approach provides a much more realistic estimate of how the model will perform in a live, forward-looking trading environment.
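A minimal version of this protocol, sketched with scikit-learn's KFold (shuffled splits, RMSE scoring); the make_model factory is an assumption standing in for 3T's actual model constructors:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

def cross_validate_rmse(make_model, X, y, k=5, seed=0):
    """Average validation RMSE across k shuffled folds."""
    rmses = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])        # fit on k-1 folds
        preds = model.predict(X[val_idx])            # score on the held-out fold
        rmses.append(mean_squared_error(y[val_idx], preds) ** 0.5)
    return float(np.mean(rmses))
```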
Threshold Optimization
A core innovation of 3T is how time and market pricing behavior are handled. This allows for a set of dynamic features that introduce delays, forming a state-contingent decision-timing framework; these are optimized through the ANN training described above. This is 3T's mechanism for exercising algorithmic prudence, a direct counterpoint to the high-frequency trading (HFT) paradigm that prioritizes speed above all else. While HFT systems race to react within microseconds, often amplifying noise and contributing to market instability, 3T is designed to:
wait patiently for a clear, high-quality signal to emerge from the noise
treat many variable positions as a single integrated portfolio
when portfolio components become negative, their risk allocation is removed, but they continue to be monitored
when the overall portfolio is in a drawdown state, treat this as a temporary credit facility extended to the instrument markets being traded, managed through careful calibration of position size and margin so the portfolio can sustain itself until it hits the expected profit objective
introduce position 'bake time', which can be understood as a direct technical implementation of the cardinal virtue of prudence. It is algorithmic discernment, a programmed patience that waits for clarity before acting. This deliberate approach is designed to improve the quality of trading decisions, normalize annualized percentage rates (APR) by avoiding trades in highly uncertain conditions, and ultimately enhance the long-term stability and profitability of the system (illustrated in the sketch after this list).
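As an illustration of the 'bake time' idea, the sketch below gates action on a signal persisting for a minimum duration; the class name and threshold are hypothetical, not 3T's implementation:

```python
import time
from typing import Optional

class BakeTimer:
    """Allow action only after a signal has persisted for `bake_seconds`."""

    def __init__(self, bake_seconds: float):
        self.bake_seconds = bake_seconds
        self.signal_since: Optional[float] = None

    def ready(self, signal_active: bool, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        if not signal_active:
            self.signal_since = None       # signal lost: restart the clock
            return False
        if self.signal_since is None:
            self.signal_since = now        # signal first observed: start baking
        return (now - self.signal_since) >= self.bake_seconds
```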
Entropy and Oscillation
The decision to add or remove risk on a position is partially governed by a systematic, quantitative assessment of the market state, based on two primary metrics calculated from real-time price data streams:
Market Entropy: Entropy is a measure of randomness, disorder, and uncertainty. In financial markets, high entropy corresponds to periods of noise, irrational behavior, and unpredictability, making it difficult to discern underlying trends. 3T utilizes the Python library antropy to calculate the entropy of recent price distributions. A trading decision is only considered when entropy falls below a dynamically tuned threshold, indicating that the market has entered a more ordered, predictable state where the signal-to-noise ratio is high.
Hurst parameter: The Hurst exponent (H) is fundamentally a statistical measure of the long-term memory, or persistence, of a time series. It allows 3T to classify the market's dynamic regime. Using the nolds library, 3T continuously calculates H. A trade is only deployed when a persistent state is identified.
This is the most active area of research, inspired by earlier work in ARIMA and GARCH, and it is not currently part of live execution.
Entropy | Hurst Exponent | Action | Interpretation
High | H ≈ 0.5 (Random) | WAIT | Maximum uncertainty. The market is noisy and has no discernible structure.
High | H ≠ 0.5 (Persistent) | WAIT | A potential trend or mean-reverting structure exists, but it is obscured by high noise. The signal is unreliable.
Low | H ≈ 0.5 (Random) | WAIT | The market is not noisy, but it lacks a predictable structure. Future movements are random.
Low | H ≠ 0.5 (Persistent) | ACT | Optimal condition. The market is clear (low noise) and has a predictable structure (trending or mean-reverting). Deploy strategy.
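The decision table above can be expressed as a simple gate. The following research-style sketch uses antropy's normalized permutation entropy and nolds' rescaled-range Hurst estimate; the specific metrics and thresholds are illustrative assumptions, and (per the note above) this logic is not part of live execution:

```python
import numpy as np
import antropy as ant
import nolds

ENTROPY_THRESHOLD = 0.9   # normalized permutation entropy in [0, 1]; assumed value
HURST_BAND = 0.05         # |H - 0.5| below this is treated as "random"; assumed value

def classify_regime(prices: np.ndarray) -> str:
    """Return 'ACT' only in the low-entropy, persistent-structure quadrant."""
    returns = np.diff(np.log(prices))
    entropy = ant.perm_entropy(returns, normalize=True)  # noise / disorder measure
    hurst = nolds.hurst_rs(returns)                      # long-term memory measure
    if entropy >= ENTROPY_THRESHOLD:
        return "WAIT"        # high entropy: signal obscured regardless of structure
    if abs(hurst - 0.5) <= HURST_BAND:
        return "WAIT"        # low noise but random walk: no exploitable structure
    return "ACT"             # low noise + persistent (or mean-reverting) structure
```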
Proof of Concept HyperLiquid DEX Integration
3T currently executes its strategies on the HyperLiquid decentralized exchange (DEX). This venue was specifically chosen for its unique combination of features that are critical for 3T's performance and mission alignment. To see the current proof of concept in action visit:
Or utilize your preferred analytics tool with the account:
0x77dEe11bbF0Ad1143Bf008a84AE776557e2ebADE
Execution Venue
HyperLiquid is built on its own high-performance Layer-1 blockchain, which supports a fully on-chain Central Limit Order Book (CLOB). This architecture provides the low-latency, high-throughput (up to 200,000 orders per second), and advanced order types characteristic of a centralized exchange (CEX), while preserving the core DeFi principles of transparency, non-custodial asset management, and permissionless access. This hybrid model offers the ideal environment for a sophisticated algorithmic agent, combining CEX-grade performance with blockchain-native integrity.
Once a symbol's simulated portfolio P&L shows a gain, 3T constructs and signs a transaction that is broadcast to the exchange on-chain via market "liquidity taker" orders.
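Taker execution through the official Python SDK looks roughly like the sketch below; the coin, size, and key handling are placeholders (HL_PRIVATE_KEY is an assumed environment variable, not part of the SDK), and a real deployment would never hard-code key material:

```python
import os
from eth_account import Account
from hyperliquid.exchange import Exchange
from hyperliquid.utils import constants

# Load the signing wallet; HL_PRIVATE_KEY is a placeholder for secure key storage.
wallet = Account.from_key(os.environ["HL_PRIVATE_KEY"])
exchange = Exchange(wallet, constants.MAINNET_API_URL)

# Market "liquidity taker" order: buy 0.01 ETH at the best available prices.
result = exchange.market_open("ETH", True, 0.01)
print(result)
```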
Instruments & Data
3T's initial deployment focuses on a curated set of the liquid perpetual contracts available on the HyperLiquid platform. This focus ensures sufficient market depth to execute trades with minimal slippage and price impact. 3T's end-to-end data and execution flow is managed through HyperLiquid's public APIs. For streaming time and sales data, 3T establishes a persistent connection to the HyperLiquid WebSocket API.
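A minimal subscription to the time-and-sales stream via the SDK's Info client might look like this sketch (the coin and handler are placeholders):

```python
from hyperliquid.info import Info
from hyperliquid.utils import constants

def on_trades(msg: dict) -> None:
    # Each message carries recent fills (time & sales) for the subscribed coin.
    print(msg)

# Info opens a persistent WebSocket connection by default (skip_ws=False).
info = Info(constants.MAINNET_API_URL)
info.subscribe({"type": "trades", "coin": "ETH"}, on_trades)
```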
Tools
3T is built upon a foundation of robust, well-vetted, and primarily open-source technologies. This commitment to using industry-standard tools ensures reliability, facilitates maintenance, and invites collaboration from the broader community of developers and quantitative researchers. The following table outlines the core components of our technology stack.
Tool | Role | Description
PyTorch / AutoGluon | AI/ML Framework | Used for designing, training, and deploying the core Artificial Neural Network (ANN) models. The dynamic computation graph and tabular predictors are ideal for research and development.
antropy | Entropy Calculation | Used to compute the entropy of price distributions, providing a quantitative measure of market uncertainty and noise. This is the second key input for the 'bake time' framework.
hyperliquid-python-sdk | Exchange API Interaction | The official Python SDK for interacting with HyperLiquid's REST and WebSocket APIs, enabling real-time data ingestion and automated trade execution.
pandas / numpy | Data Manipulation & Computation | The foundational libraries for all quantitative analysis in Python. Used for handling time-series data, performing numerical operations, and preparing data for the ANN models.
nolds (current research focus) | Nonlinear Time-Series Analysis | A specialized library used to calculate the detrended fluctuation analysis (DFA) exponent, helping to classify the out-of-sample market regime (trending vs. mean-reverting) during inference.