How AI Systems Shape the Markets They Predict
Written by Chicago Booth MBA Student Santiago Etchepare
- March 04, 2026
- Center for Applied Artificial Intelligence
Is it possible to build reliable AI for financial predictions? This question, and many permutations of it around the deployment of AI, lingers at the forefront of many Booth students’ minds, my own included. I recently moderated a Lunch & Learn conversation with Professor Bradford Levy, hosted by the Center for Applied AI and the AI Student Group, and one theme kept resurfacing: AI models do not exist in isolation. Once deployed, they become active contributors to institutions and markets. They influence capital allocation, pricing, communication, and risk-taking. Their outputs shape human decisions, which then reshape the data the models learn from. That is the heart of the reliability question, and it is why this discussion matters.
It is tempting to evaluate an AI system on historical data and declare it successful because it outperforms older models. In finance, we rely heavily on backtests, meaning simulations that test a strategy against past market data to estimate how it would have performed. If a forecasting model achieves strong out-of-sample accuracy or produces attractive risk-adjusted returns in simulation, we often treat it as validated. But this assumes that statistical patterns embedded in the past will persist in the future.
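The backtest idea above can be sketched in a few lines. Everything here is invented for illustration, not a real strategy: a synthetic market with a weak momentum effect, a toy rule of going long after an up day, and an in-sample versus out-of-sample split.

```python
import random

# Synthetic daily returns with a weak momentum effect.
# The 0.3 persistence parameter is purely illustrative.
random.seed(0)
n_days = 1000
returns = []
prev = 0.0
for _ in range(n_days):
    r = 0.3 * prev + random.gauss(0, 0.01)  # today leans on yesterday
    returns.append(r)
    prev = r

# Split the history: fit intuition on one period, test on the other.
split = 700
train, test = returns[:split], returns[split:]

def backtest(rets):
    """Toy rule: hold the market the day after an up day; return total P&L."""
    pnl = 0.0
    for yesterday, today in zip(rets, rets[1:]):
        if yesterday > 0:   # the 'signal'
            pnl += today    # one-day holding return
    return pnl

print(f"in-sample P&L:     {backtest(train):+.4f}")
print(f"out-of-sample P&L: {backtest(test):+.4f}")
```

The point of the split is exactly the caveat in the paragraph above: an attractive in-sample number only tells you the rule fit the past, and even the out-of-sample number assumes the market's structure does not change after the test window ends.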
Markets are adaptive systems. A market signal, meaning a statistical relationship such as earnings momentum or sentiment trends, can lose its predictive power once widely known and exploited. This is what we mean when we say a signal decays. As more participants act on it, prices adjust faster and the opportunity shrinks. A model trained on yesterday’s structure can therefore fail quietly under tomorrow’s regime. Performance under stable historical conditions tells us little about resilience under structural change.
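Signal decay can be made concrete with a small simulation. The setup below is a toy, with invented parameters: a signal whose true predictive strength fades over time, standing in for a relationship that weakens as more participants trade on it.

```python
import random

random.seed(1)

# Toy model of signal decay: the signal's true strength fades linearly
# toward zero over the sample, standing in for crowding. All parameters
# are illustrative.
n_days = 2000

def corr(xs, ys):
    """Plain Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

signals, rets = [], []
for t in range(n_days):
    strength = max(0.0, 0.5 * (1 - t / n_days))   # decays over time
    s = random.gauss(0, 1)                         # the observed signal
    r = strength * s + random.gauss(0, 1)          # return = signal + noise
    signals.append(s)
    rets.append(r)

early = corr(signals[:500], rets[:500])
late = corr(signals[-500:], rets[-500:])
print(f"predictive correlation, early window: {early:.2f}")
print(f"predictive correlation, late window:  {late:.2f}")
```

A model trained only on the early window would estimate a strong relationship that simply no longer exists by the end of the sample, which is the quiet failure mode described above.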
In my AI and Financial Information class, we build systems that ingest financial disclosures, generate embeddings, retrieve relevant context, and produce structured outputs for analysis. It becomes clear that a low error rate on a benchmark, meaning a standardized evaluation dataset used for comparison, does not guarantee real-world reliability. Benchmarks rarely capture distribution shift, adversarial inputs, incomplete data, or incentive changes. The exercise reinforces a broader lesson: demonstration is not deployment.
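The ingest-embed-retrieve shape of such a system can be sketched minimally. Real pipelines use learned embeddings from a language model; this toy version uses bag-of-words vectors and cosine similarity purely to show the shape, and the disclosure snippets are invented.

```python
from collections import Counter
import math

# Invented disclosure snippets standing in for ingested filings.
documents = [
    "revenue increased due to strong product demand",
    "the company faces litigation risk from pending lawsuits",
    "interest rate changes may affect borrowing costs",
]

def embed(text):
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Ingest + embed once, up front.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query):
    """Return the stored snippet most similar to the query."""
    q = embed(query)
    return max(index, key=lambda item: cosine(q, item[1]))[0]

print(retrieve("what legal risks does the company face"))
```

Even this toy exposes the benchmark problem: it works on queries that resemble the indexed text and fails silently on paraphrases with no word overlap, a miniature version of the distribution-shift gap between evaluation and deployment.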
Reliability must therefore include robustness to distribution shift and continuous recalibration. It must also include governance. If a forecasting system contributes to misallocation of capital, accountability cannot be an afterthought. Oversight and clearly defined responsibility are part of reliability itself.
There is also a deeper institutional tension. Our regulatory frameworks and governance systems were designed for a world in which decision-making authority was primarily human. As AI systems grow more autonomous, those institutions may struggle to adapt. The danger is subtle misalignment, where optimization toward narrow metrics erodes long-term trust and stability. Reliability becomes a strategic choice, not just a technical achievement.
As advanced models become widely accessible, informational advantage compresses. Many firms will process filings, summarize disclosures, and generate forecasts with similar tools. The differentiator will not be technical sophistication alone but judgment. Leaders must define objectives carefully and design systems that balance efficiency with resilience. Extraordinary computational capability does not automatically produce wisdom.
We are in a transitional phase. The systems we are building are powerful and improving rapidly, yet our understanding of how to integrate them responsibly is still forming. Reliability is not a metric on a slide. It is a discipline that combines technical rigor, institutional design, and ethical reflection. If we recognize AI as concentrated decision-making power embedded in markets and institutions, we also accept the responsibility that comes with it.