
Company Thoughts

New Year, New Data Baseline

Jan 7, 2026

[Image: black and white photo of New Year's celebrations in New York, with fireworks over the city.]

The Kickoff

The start of the year is often when teams formalise priorities that were discussed but not resolved during the previous cycle. Conversations about data, infrastructure and investment workflows remain active, and questions about how data foundations influence scale, and how new inputs reshape research and execution, are coming into sharper focus. Reflecting on the year just passed creates space for clearer thinking about what to build next. With that context, here is our first edition of 2026.

The Compass

Here's a rundown of what you can find in this edition:

  • Catching you up on what’s been happening on our side

  • Newest partner additions to the Quanted data lake

  • Insights from our chat with Evan Schnidman of Fidelity Labs

  • A deeper look into structural data challenges in fixed income

  • Highlights from recent shifts in the global macro regime

  • How to stop isolated workflows from generating inconsistent data

  • An enlightening piece on Bridgewater’s approach to alpha and beta

Insider Info

It has been a minute. 207,360 of them, to be exact. We have been very heads down, which is the polite way of saying we disappeared into a product tunnel. The good news is that we came out the other side with a lot to show for it.

Here is the quick rundown.

  • Launched the Quanted Query beta to select buy-side firms. It breaks theses and research papers into testable hypotheses and applies our custom reasoning engine to link each one to relevant features in our data lake, giving the buy-side an empirical way to validate ideas, surface blind spots, and avoid wasted engineering cycles pre-trial. You can test the beta here.

  • Added refinements to the Quanted Data Bridge including UX/UI customisations requested by hedge fund design partners.

  • Rolled out our data onboarding agent, increasing our capacity to 4 datasets onboarded per week and paving the way to our goal of 5 datasets onboarded per day in Q1 of this year.

  • Hosted our inaugural buy-side lunch in NYC with Rebellion Research and Databento, bringing together leading quant practitioners from the city's top funds to discuss the changing landscape of quantitative finance and alpha discovery.

  • The Quanted team got together for our first offsite in Italy, with many late nights of coding and strategy sessions that laid most of the groundwork for the Q4 product push mentioned above.

  • Ashutosh Dave joined as our newest quant to expand R&D capacity and rigour. His 16 years of experience have been invaluable in making sure we deliver our latest products with real users and use cases at the forefront of the development process.

  • Expanded our GTM team with Juan Diego Franco Lopez joining as a partnerships associate, allowing us to scale the signing of the best data vendors onto the Quanted platform for users to test against.

  • Caught up with many familiar faces and met some new ones at the NY Neudata Winter summit in December. 

We are starting the year’s first Tradar feature count at 4.5 million feature columns in the data lake, with 1,500+ unique feature transformations across our full dataset universe. The focus now is on executing early 2026 priorities, with three widely requested product additions in thesis validation, research paper replication, and a use case we are calling backtesting by analogy.

On the Radar

We have two new data partners to welcome this month, as we focus on getting recent additions fully onboarded and integrated into the system. Each one adds to the growing pool of features quants can test, validate, and integrate into their strategies. A warm welcome to the partners below: 

Yukka

Yukka has a 5-year technological lead in news-derived event detection and sentiment scores using proprietary LLM and AI pipelines, turning over 2 million articles per day from 210k+ global sources into tradeable signals for stocks, indices, and bonds. Our datasets are uncorrelated, non-standard, independent from industry-crowded signals, and generate significant alpha. We also offer 15+ years of historical data, cutting-edge APIs, a visual cockpit for fundamental analysis, and customized datasets tailored to client needs.

Unacast

Unacast is the leading provider of global location intelligence, delivering cutting-edge analytics about human mobility in the physical world. Using state-of-the-art machine learning and industry expertise, Unacast provides high-quality, privacy-compliant human mobility datasets, APIs, and insights derived from cleaned and merged GPS and device signals. Our data enables quants to incorporate location intelligence into research and systematic models for consumer behavior, market activity, and real estate trends without the need to build in-house geospatial pipelines.

The Tradewinds

Expert Exchange

At the end of last year, we sat down with Evan Schnidman, Head of Fidelity Labs, to explore a career that has spanned academic research, early-stage data innovation and large-scale enterprise transformation. Evan began by developing a quantitative framework for analysing central bank communication during his PhD at Harvard. That research ultimately led him to found Prattle, one of the earliest companies to convert nuanced language into structured sentiment signals used by institutional investors. After Prattle was acquired by Liquidnet, he continued to lead data innovation and worked closely with buy-side teams and external vendors to integrate novel datasets into the investment process.

He went on to advise more than three dozen startups across data, analytics and fintech. During that period, he also co-founded MarketReader, helping design the company’s earliest product, and built Outrigger Group into a firm providing fractional C-suite support in data, AI, product development and commercial strategy for both fast-growing startups and established enterprises. Now at Fidelity Labs, Evan oversees the incubation of new fintech businesses and observes how an enormously credible legacy institution navigates rapid technological change whilst building ventures that can scale independently. In our conversation we talk about how language-based analytics have evolved since the early days of Prattle, the realities of building and scaling data products, and how enterprise innovation is changing client relationships across financial markets.

What has building products across the broad range of early stage startups to institutional environments taught you about how organizations balance new data exploration with the reality of legacy workflows?

Legacy workflows are extremely difficult to disrupt and often exist for rational reasons, including risk and compliance controls. As much as the data innovator in me would love to see rapid adoption of new datasets and new data tools/technology, many organizations (especially those in regulated industries, like finance) simply cannot change process fast enough to keep up with rapidly proliferating data and AI tooling.

This slow pace of change is probably a good thing. Novel data and AI tech often change too quickly for large institutional adoption to be rational until the new technology is validated.

It is important to remember that most large organizations are making 3-5 year bets on technology tools. 3-5 years ago the data and AI landscape looked very different.

What feels most different today about how investors treat language-based or unstructured data compared to the early NLP era you helped shape?

Early NLP was basically good buzzword minus bad buzzword equals “score.” I joined the space at a time when a Bag of Words approach was slowly supplanting rudimentary counting, but we were a long way from modern NLP. The innovation that I helped contribute to the space was a focus on mathematics to unlock the dimensionality of language, showing it as more nuanced than positive/negative and thus able to correlate directly with financial outcomes. The reason I was able to make that contribution was domain expertise in economics.

The current era is going through a similar evolution. Early LLMs felt like buzzword-based approaches, while the fourth and fifth generation models feel more like Bag of Words. It is pretty apparent that the next evolution of language models will leverage mathematics (in the form of graph RAG) and domain expertise to create small language models that are far more accurate for specific use cases.

Once this class of models is mature, investors may be able to trust not only data outputs, but wholesale agentic workflows.

Having worked on each side of the data relationship, how do you see the relationship between investors and data providers changing as the volume and complexity of available data grows? 

The challenge pure data providers face is one of basic arithmetic. The number of datasets available has proliferated much faster than data budgets have grown. Moreover, the number of data inputs to investment models has ballooned, so the data may be in higher demand than ever, but the unit economics has fundamentally changed.

Data providers can no longer survive on providing one or two high-value datasets; they need a suite of offerings. That suite of offerings requires seamless delivery. A few years ago, that meant upgrading from FTP to API; now it means autonomous delivery via MCP servers.

This means data providers now need to offer more data products than ever before and need to engage in data engineering that allows them to make their data easier to consume than ever before. This data engineering work rapidly evolves into AI, specifically agentic workflows automating delivery of highly specialized data and insights.

Looking at the next decade of investment research, what types of structured or unstructured data do you suspect are still underexplored but likely to matter once firms can process them at scale?

The vast majority of the world’s data still sits in private hands. I expect we will see a massive wave of personalized AI tooling based on “your” data that allows investors to shortcut their normal processes and examine far more investment opportunities with some degree of depth, while still reflecting their unique screens and mental models.

This leveraging of private data is fantastic as a screening tool to reflect your own worldview, but in order to do complete investment research, one also needs to examine alternate perspectives. This diversity of perspectives is missing from current (generic) AI tools, but existing data can/should be used to train such systems over the next decade.

How does Fidelity Labs differ from a standard corporate venture capital (CVC) arm, and what attributes make it a standout place to build a company?

Fidelity Labs is more like a corporate venture studio. We build businesses from scratch with the express purpose of creating companies that can either be the future of Fidelity or spin out and scale independently.

Although we closely collaborate with investment teams and research divisions, Fidelity Labs is focused on the art and science of building brand new businesses. Most new businesses in fintech struggle with access to capital, forced short-term thinking and distribution. At Fidelity, we have a great deal of financial resources at our disposal, a very long time horizon and built-in distribution mechanisms. I can’t overstate how valuable those assets are. 

Numbers & Narratives

Fixed Income Data: The Prerequisite for Automation

The SIX survey confirms what operational data has shown for years: the primary structural instability in fixed income is not complex analytics or market pricing; it is reference data. Specifically, 41% of firms cite instrument definition as their most acute data challenge, reinforced by poor data quality (56%) and integration issues (47%) reported across the buy side. When terms and features vary across disparate sources, risk, PnL, and performance systems inevitably interpret this divergence as noise, compromising signal clarity.

This foundational inconsistency presents the major roadblock to efficiency. This is why only 31% of firms have achieved a largely automated state, with 56% remaining only partially automated. The data input dictates the constraint.

High-achieving teams already understand the solution. They operate under the premise that data control and harmonisation are the true fix, prioritising accuracy, transparency, and traceability, which the survey identifies as the top provider requirements. Coverage only adds value when instrument identities remain consistent across ingestion. This is why 53% of firms favour API-based delivery and 28% use cloud warehouse integration, since both support continuous validation rather than passive downstream consumption.

The data also clarifies where performance drift truly originates. A large share of model instability stems from structural inputs rather than behavioural changes. When issuer hierarchies, coupon terms, and call features shift between sources, exposure profiles move even when markets do not. Once these elements are harmonised and reconciled, risk and performance outputs stabilise, turnover falls for the right reasons, and automation becomes achievable at scale. The firms that enforce this consistency are the ones producing cleaner signals and fewer operational breaks.
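
To make the harmonisation point concrete, here is a minimal sketch of that kind of reference data reconciliation in Python. The instrument, fields, and values are hypothetical, not drawn from the SIX survey; the idea is simply to flag divergent terms between two vendor feeds before they reach risk or performance systems.

    # Illustrative sketch: reconcile instrument reference data across two vendor feeds.
    # All field names and sample values are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BondReference:
        isin: str
        issuer: str
        coupon: float      # annual coupon, in percent
        maturity: str      # ISO date
        callable: bool

    def reconcile(primary: BondReference, secondary: BondReference) -> list:
        """Return the fields on which the two sources disagree."""
        fields = ("issuer", "coupon", "maturity", "callable")
        return [f for f in fields if getattr(primary, f) != getattr(secondary, f)]

    feed_a = BondReference("XS0000000001", "ACME Corp", 4.25, "2031-06-15", True)
    feed_b = BondReference("XS0000000001", "ACME Corporation", 4.25, "2031-06-15", False)

    breaks = reconcile(feed_a, feed_b)
    if breaks:
        # A divergent issuer hierarchy or call feature would silently shift exposure
        # profiles downstream, so the record is blocked until the break is resolved.
        print(f"Reference break on {feed_a.isin}: {breaks}")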

Link to SIX's September Fixed Income Rapid Read

Time Markers

The First Stress Test of 2026

The 2026 macro consensus is beginning to meet its first real stress test, as markets shift from extrapolating AI-driven earnings growth into pricing the labor market softness, fiscal durability, and policy risk that were largely ignored in 2025. Entering the year, global growth expectations were resilient but increasingly fragmented, with trade frictions, higher structural costs, and uneven policy credibility embedding regional dispersion rather than a synchronized expansion path. The implication is that elevated AI-linked equity multiples now coexist with private credit fragility, central bank independence risk, and sticky inflation, creating asymmetric downside even as headline growth remains intact. The violent tariff-driven drawdowns and recoveries of 2025 showed that markets can reprice sharply ahead of earnings deterioration, favoring rotation toward balance-sheet strength and downstream AI adopters over pure infrastructure exposure. Recent geopolitical events such as Venezuela's leadership disruption have reinforced sectoral transmission channels, with defense and industrial equities reacting faster than oil or broad inflation measures. Against this increasingly fragmented backdrop, a portfolio framework focused on dispersion, selective real assets, and liquidity-aware positioning is more robust than relying on directional macro conviction alone.

Navigational Nudges

If you look at how most investment firms evolve, the data often mirrors the organisation more than the market. One team owns trades, another owns risk, another owns pricing. Each part works locally, but cross-strategy work exposes the gaps. That pattern is Conway’s Law at work, and you see it as soon as strategies need to share the same data.

The underlying culprit is the inconsistent data created by isolated workflows. A model that backtests cleanly in research shows slippage live because execution uses a different price timestamp. Risk aggregates factor exposures on a different clock than the book. Finance books PnL on its own definitions. Nothing is broken, but the system never lines up in one frame. This is what caps scaling: strategies with strong signal quality fail to scale because the underlying data cannot support uniform behaviour, and you lose confidence in your own tools.

Here are simple steps that make all the difference:

  • Anchor everything to a single timeline

    Force all domains to use one event clock: trades, positions, pricing, corporate actions, funding. Without a unified time base, cross-asset signals break.

  • Create one canonical securities master

    No duplicates, one ID, one taxonomy, version controlled. Half of the scaling issues in multi-asset portfolios come from divergent identifiers.

  • Converge research and production on one feature store

    Do not allow local feature copies. Every factor, return series, and risk input must be generated from the same code path and metadata.

  • Write data contracts for critical flows

    Execution, pricing, risk, and portfolio accounting must publish guaranteed schema, latency, and quality thresholds. If one breaks the contract, block downstream ingestion; a minimal sketch of such a check follows this list.

  • Put core metric definitions directly in the codebase so every report uses the same logic.

    PnL, turnover, liquidity, exposures and similar measures should all come from one library used across every pipeline; a sketch of such a library follows the closing note below.
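
To make the data contract idea concrete, here is a minimal sketch in Python. The column names, latency budget, and quality threshold are illustrative assumptions rather than prescribed values; the point is that a violated contract blocks ingestion instead of letting inconsistent data leak into risk or PnL.

    # Minimal data contract check for a hypothetical pricing feed.
    # Thresholds and column names are placeholders.
    from datetime import datetime, timezone

    CONTRACT = {
        "required_columns": {"instrument_id", "price", "source", "event_time"},
        "max_latency_seconds": 60,    # publisher guarantees delivery within this window
        "max_null_fraction": 0.01,    # at most 1% missing prices per batch
    }

    def validate_batch(rows: list) -> list:
        """Return contract violations; an empty list means the batch may be ingested."""
        violations = []
        now = datetime.now(timezone.utc)
        for i, row in enumerate(rows):
            missing = CONTRACT["required_columns"] - row.keys()
            if missing:
                violations.append(f"row {i}: missing columns {sorted(missing)}")
            # event_time is assumed to be a timezone-aware datetime
            elif (now - row["event_time"]).total_seconds() > CONTRACT["max_latency_seconds"]:
                violations.append(f"row {i}: event_time breaches the latency budget")
        null_prices = sum(1 for row in rows if row.get("price") is None)
        if rows and null_prices / len(rows) > CONTRACT["max_null_fraction"]:
            violations.append("null price fraction exceeds the contract threshold")
        return violations

    # Downstream ingestion runs only when validate_batch(...) returns an empty list.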

Good platforms grow from enforced alignment, not architecture diagrams. When these foundations are consistent, strategies scale cleanly and the system behaves like one mind instead of many.
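
And for the shared metric definitions, here is a minimal sketch of what that single library might look like, assuming a plain Python module that every pipeline imports. The function names and signatures are illustrative, not a prescribed standard.

    # metrics.py: one shared definition per core measure, imported by every
    # pipeline and report instead of re-implemented locally. Names are illustrative.
    import numpy as np

    def pnl(positions: np.ndarray, price_changes: np.ndarray) -> float:
        """Mark-to-market PnL: positions held over the period times price moves."""
        return float(np.sum(positions * price_changes))

    def turnover(traded_notional: np.ndarray, gross_book: float) -> float:
        """Traded notional as a fraction of gross book value."""
        return float(np.sum(np.abs(traded_notional)) / gross_book)

    def gross_exposure(position_notional: np.ndarray) -> float:
        """Sum of absolute notional across positions."""
        return float(np.sum(np.abs(position_notional)))

    # Research, risk, and finance all call metrics.pnl(...) and friends, so a
    # single definition change propagates to every report at once.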

The Knowledge Buffet

🔎 Bridgewater’s Alpha-Beta Framework: How Risk Parity and Portable Alpha Generate Returns 🔎

by Navnoor Bawa

This piece evaluates Bridgewater's approach to creating portable alpha and risk parity, and explains how the regime shift in correlations contributed to Bridgewater's 2022 drawdown, as well as their approach to determining capacity and alpha delivery. If you’ve been revisiting how much capital to put into these approaches after the last few years, this is definitely a useful read.

The Closing Bell

Where do quants go to multiply factors on New Year’s Eve?

Times Square.

Keep up to date with Our Tradar Newsletter

Gain valuable market insights, exclusive interviews & updates on our technology delivered straight to your inbox!