Institutional AI and blockchain adoption are accelerating, with Jimmy Hu of Tensor Investment Corp outlining key opportunities and risks.

With institutional interest in AI and blockchain accelerating, we had the chance to interview Jimmy Hu, a unique voice at the intersection of AI, capital markets, and decentralized infrastructure.

Jimmy serves as Chief AI Officer at Tensor Investment Corp, an Asia-based proprietary trading firm, and Head of AI at Bondi Tech, the largest high-frequency commodity options market maker in China. He previously founded APEX Technologies, one of the earliest companies in China focused on enterprise machine learning and predictive AI, and earlier worked in data science at Microsoft. Today, he advises hedge funds and trading firms on building AI infrastructure that creates a differentiated edge in markets.

Part 1: Where AI and Crypto Are Actually Converging

1. You’ve long worked at the intersection of AI and institutional finance. There’s a lot of hype around the AI and blockchain convergence, but where are you actually seeing real traction, especially in production environments or capital markets?

The most direct and growing convergence of AI and crypto is, you guessed it, trading. AI is used extensively to trade crypto at the highest levels, across all liquid assets and across various strategies and trading frequencies. Whether it's CTA trend following, HFT market making, or stat arb, the top shops involved in crypto are using AI in one way or another to create that extra edge and to make models more adaptive to the ever-changing market environment. Of course, at this level, the AI we are referring to is not LLMs or agentic AI, but rather AI systems built for time-series prediction using deep learning and reinforcement learning.

Of course, there are LLM-based agents and research tools used by retail crypto traders for investment research, but these are rarely a main topic of discussion or usage within the quant shops. The technology that AI-enabled trading shops use is arguably older technology, but the implementation is far from trivial. Honestly speaking, from our empirical studies in Asia, even the top quantitative hedge funds do not have a completely confident, comprehensive understanding of how to implement deep learning in a systematic, robust, and adaptive pipeline for their core trading strategies. Others lack the compute and infrastructure needed to scale their AI systems.

Nevertheless, a portion of those that try, even with makeshift systems, already reap the benefit of higher risk-adjusted returns than their counterparts. Getting the basics right, avoiding major implementation pitfalls, and building layer by layer already delivers improved metrics across strategies and is a workable approach. There is also a myriad of problems AI can be applied to in trading, including but not limited to portfolio optimization, directional prediction, factor/feature selection, algo execution, deep hedging, and market making.

2. Given your experience in hedge fund infrastructure and federated learning, what’s the most viable path toward building decentralized or verifiable AI systems—and what gaps still need to be filled?

Currently, the phrases "hedge fund AI infrastructure" and "decentralized AI" are, from every angle, an oxymoron. Hedge funds build their AI infrastructure as a long-term moat and as intellectual property; no part of it can or should be decentralized.

3. Some believe blockchain networks can help enforce traceability in AI systems. Do you see Ethereum or other L1s playing a meaningful role in enabling auditability or accountability for enterprise AI?

I can see this as viable, but how much traction it gets depends on actual adoption at scale. And at scale, speed and performance are a large issue, especially with large amounts of granular data (if we are talking about trading). Blockchain may have a role in logging the execution of trades and their details, as well as in explainability, but I don't see the edge over a relational database, especially because such information should stay private unless it is required at a compliance level.

4. You’ve built privacy-preserving compute systems and advised federated learning efforts. What’s your take on how crypto could change the economics of data ownership and compensation in the AI ecosystem?

Federated learning is an innovative and very useful technology that can, in a way, enable decentralized data ownership while allowing collaborative training of AI models. In reality, federated learning does not have to involve blockchain at all; blockchain-based implementations exist, but they are rarely used in practice today.
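To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), the core aggregation step behind most federated learning: each party trains locally on its own data and shares only model weights, never the raw data. All function names and the toy linear model are illustrative, not any production system.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One party's local training: plain gradient descent on linear regression.
    Only the updated weights leave the party; the raw data (X, y) never does."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """FedAvg: aggregate local models, weighted by each party's dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

# Two parties with private datasets collaboratively fit y = 2*x
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 1)), rng.normal(size=(80, 1))
y1, y2 = 2 * X1[:, 0], 2 * X2[:, 0]

w = np.zeros(1)  # shared global model
for _ in range(20):  # communication rounds
    w1 = local_update(w, X1, y1)
    w2 = local_update(w, X2, y2)
    w = federated_average([w1, w2], sizes=[50, 80])
```

Note that nothing in this loop requires a blockchain; a coordinating server (or any trusted aggregator) is enough, which is the point made above.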

5. At Tensor, you’re working with institutional asset managers. What are they actually asking for when it comes to deploying AI in capital markets—and where are they still cautious?

Yes, Tensor both conducts its own prop trading and provides AI solutions and systems for asset managers. Our hedge fund partners typically look for a few things: 1) custom-tailored, near-fully AI-based trading strategies, while retaining control over factors such as risk management and position sizing; 2) diversified AI-driven alpha streams with low correlation to their existing strategies and funds, which they can add directly to increase risk-adjusted return, expand capacity, and provide a supplementary edge.

There are things they are cautious about, and very rightly so. AI has its pitfalls in trading: if implemented at a subpar level and without rigorous stress-testing of various sorts, it can blow up in your face faster than typical alpha decay in a run-of-the-mill quant strategy. This is often referred to as "model overfitting": fitting the model to noise in historical data, so it performs well in simulations but can crash and burn catastrophically in practice. There are various ways to test for and address this, one of which is explainability of the AI system, meaning minimizing the black-box aspect of AI trading and maximizing the explainable aspects. For example, if we can explain why the strategy consistently performs across various market regimes, then we can expect limited downside in similar situations.
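One common, model-agnostic way to probe a black box, offered here as an illustration rather than the firm's actual method, is permutation importance: shuffle one feature at a time and measure how much out-of-sample performance drops. Features whose shuffling barely matters are candidates for fitted noise. All names below are hypothetical.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Average drop in performance when each feature column is shuffled.
    A near-zero drop suggests the model's reliance on that feature is noise."""
    rng = np.random.default_rng(seed)
    base = metric(y, model(X))
    drops = []
    for j in range(X.shape[1]):
        col_drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target link in place
            col_drops.append(base - metric(y, model(Xp)))
        drops.append(np.mean(col_drops))
    return np.array(drops)

# Toy check: the target depends only on feature 0; feature 1 is pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = X[:, 0]
model = lambda X: X[:, 0]                        # stand-in "fitted" model
r2 = lambda y, p: 1 - np.mean((y - p) ** 2) / np.var(y)
imp = permutation_importance(model, X, y, r2)
# Expect a large drop for feature 0 and none for the ignored feature 1.
```

In a trading context, the same probe run regime by regime helps answer the question above: is the strategy's performance attributable to features with an explainable economic role, or to incidental ones?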

Of course, there are proponents of “huge” black box models and maximum complexity – while I respect that, this is not our style and certainly not our partners’.

Part 2: Building Real AI Infrastructure for Hedge Funds

1. You’ve said that most hedge funds struggle to implement deep learning in a robust and scalable way. What are the key components of real-world AI infrastructure that work for trading?

In general, AI systems for trading are broken into five main components:

A) Data Layer. The data you feed in, and the evaluation of how that pipeline delivers alpha to the relevant AI models for trading. In many cases, feature (factor) selection can be automated using AI methods such as reinforcement learning.

B) Core Model Layer. These days, it is mostly deep learning of some sort or deep reinforcement learning, which may involve many different models for different assets or one large model across assets (in the case of Transformer-based architectures). Depending on how the entire system is designed, there may also be models for other important market characteristics such as volatility, as well as models for different time horizons. The design of the model layer depends entirely on the core competencies and strategies of the firm.

C) Peripheral Layer. Some firms have a layer on top of the core model layer that provides auxiliary support to enhance a particular aspect of edge and performance; this may include something like market-regime prediction models. Other firms try to implement everything at the core model layer and would not have this.

D) Optimizer Layer. It determines entries and exits, and position-sizing logic – in other words, the "portfolio" aspect. This layer needs to be aligned with the core model layer and the peripheral layer in order to function. Risk logic would be implemented at this layer as well.

E) Execution Layer. Depending on the strategies used, many firms do not have this, use more basic order-execution methods, or rely on a third-party technology vendor. This is understandable: in most cases outside HFT, execution is not the source of alpha and edge. We are talking about VWAP execution for the most part; if built in-house, a deep reinforcement learning-based stack is common.
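The five layers above can be sketched as a skeleton that shows how a signal flows from data to an order. Every class and number here is a hypothetical placeholder for illustration; only the layering itself is taken from the description.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    symbol: str
    score: float        # raw model output, not yet a position

class DataLayer:
    def features(self, symbol):
        # In practice: cleaned market data plus selected factors.
        return {"momentum": 0.4, "value": -0.1}

class CoreModelLayer:
    def predict(self, feats):
        # Stand-in for a deep learning / RL prediction model.
        return sum(feats.values())

class PeripheralLayer:
    def regime_adjust(self, score):
        # E.g. damp signals in a predicted high-volatility regime.
        return score * 0.8

class OptimizerLayer:
    def size(self, signal, max_weight=0.05):
        # Entries/exits, position sizing, and risk limits live here.
        return max(-max_weight, min(max_weight, signal.score * 0.1))

class ExecutionLayer:
    def submit(self, symbol, weight):
        # Stand-in for VWAP / broker execution.
        return f"order {symbol} target weight {weight:+.4f}"

def run_once(symbol):
    data, core = DataLayer(), CoreModelLayer()
    peri, opt, exe = PeripheralLayer(), OptimizerLayer(), ExecutionLayer()
    feats = data.features(symbol)
    score = peri.regime_adjust(core.predict(feats))
    return exe.submit(symbol, opt.size(Signal(symbol, score)))
```

The point of the layering is the clean interfaces: a firm without a peripheral layer simply drops `regime_adjust`, and one relying on a vendor replaces `ExecutionLayer` without touching the rest.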

2. Overfitting is a well-known risk in quant models, but it’s even more dangerous in AI-driven strategies. How do you stress test or design infrastructure to avoid catastrophic failures in live trading environments?

Overfitting is a complex topic and sometimes hard to define, but in general, stress testing and a rigorous evaluation approach are absolutely necessary. For AI-based strategies, however, the structure and design of the system and its models already determine the extent of overfitting risk. One simple rule: the more parameters the system has, the higher the risk. Daisy-chaining multiple layers of models to generate a signal is in the same vein and is generally discouraged.
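A standard evaluation discipline that supports the stress testing described here is walk-forward validation: models are always scored on data strictly after their training window, so in-sample luck cannot inflate the results. The helper below is an illustrative sketch, not any firm's pipeline.

```python
import numpy as np

def walk_forward_splits(n, train_size, test_size):
    """Yield (train_idx, test_idx) windows that roll forward in time,
    so each model is evaluated only on data it has never seen."""
    start = 0
    while start + train_size + test_size <= n:
        train = np.arange(start, start + train_size)
        test = np.arange(start + train_size, start + train_size + test_size)
        yield train, test
        start += test_size

# 1000 bars, 500-bar training windows, 100-bar out-of-sample windows
splits = list(walk_forward_splits(n=1000, train_size=500, test_size=100))
# Five rolling windows; every test set sits strictly after its train set.
```

Consistent performance across all the out-of-sample windows, rather than one aggregate backtest number, is what distinguishes a robust signal from a fitted one.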

We also have to understand that model outputs, if used directly as signals, carry a level of noise, so signal-generation mechanisms relying on single-point-in-time outputs may be more prone to overfitting than a cumulative approach.
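One simple cumulative approach, shown here only as an example of the idea, is to act on an exponential moving average of the model's raw outputs rather than on each single output:

```python
import numpy as np

def ema_signal(raw, alpha=0.1):
    """Cumulative signal: exponential moving average of raw model outputs,
    instead of acting on each single-point output directly."""
    out = np.empty_like(raw)
    acc = 0.0
    for i, x in enumerate(raw):
        acc = alpha * x + (1 - alpha) * acc
        out[i] = acc
    return out

# Noisy model outputs fluctuating around a true signal of +0.5
rng = np.random.default_rng(2)
raw = 0.5 + rng.normal(scale=1.0, size=2000)
smooth = ema_signal(raw)
# The smoothed series has far lower variance than the raw outputs,
# so spurious sign flips (and the trades they trigger) become much rarer.
```

The cost, of course, is lag: the smoother the signal, the slower it reacts, which is part of why this is a design choice rather than a free lunch.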

The flip side is, of course, explainability. If the system is complex and has a myriad of parameters, but they can be explained at the level of trading logic given the market environment, and that explainability can be observed consistently, then overfitting risk is reduced. Of course, certain explainable systems will experience quick alpha erosion over time, so an adaptive approach is necessary.

Once factors such as these are accounted for, we then design the various stress tests accordingly.

3. You’ve advised multiple funds on deploying AI systems. Where do you see the biggest infrastructure bottlenecks today: compute, data pipelines, explainability, or something else?

Interestingly, one of the biggest bottlenecks is none of these. It is the management and leadership aspect, along with the long-term building of AI-based intellectual property. Many of our partners have the resources to easily solve compute, data, and talent, but are not able to build up their internal, centralized IP (AI systems) consistently and in steady progression. This is caused by factors including but not limited to: 1) internal team structure; 2) turnover and retention; 3) the hands-on level of management in building out AI capabilities; 4) incentive alignment.

It can be very difficult for management to take an in-depth, hands-on approach, and to align quantitative research teams, which focus mainly on quick and agile deployment of strategies, with what is more of a long-term strategic endeavor. In many cases, the payoff in terms of dollar value returned takes at least a year or two.


This industry announcement article is for informational and educational purposes only and does not constitute financial or investment advice.