The End of Quarterly Reporting in the US? Here’s What That Means for Data-Driven Investing
A shift to semi-annual results could redefine transparency, widen inefficiencies, and accelerate demand for alternative data.
Reuters reports that SEC leadership plans to fast-track the proposal following the president’s call.
“The president’s call was timely, and so we are, you know, working to fast track it,” Atkins said, speaking to reporters at the US Securities and Exchange Commission headquarters on the sidelines of a joint roundtable with the Commodity Futures Trading Commission on policy harmonization. https://www.reuters.com/business/us-sec-chairman-atkins-vows-fast-track-scrapping-quarterly-corporate-reports-ft-2025-09-29/
Demand for alternative data1 will likely rise as formal reporting provides fewer signposts for confirming whether an investor’s thesis is playing out as expected. Yet linking alt data to fundamental KPIs2 will become harder, shifting diligence from backtesting3 toward other validation methods. Fast-money investors will still favor high-frequency alternative data despite the reduced number of reporting signposts, while longer-term investors will rely more on structural signals.
Four observations support this view:
Academic studies show that more frequent company results improve market efficiency (the degree to which market prices fully and quickly reflect all available information). Active investing depends on markets being inefficient in the short term and efficient in the long term. Less frequent reporting lengthens periods of inefficiency, creating more opportunity for alternative data to fill information gaps.
Semi-annual reporting exists elsewhere in the world. When I covered European beverages as a sell-side4 analyst, I worked within a semi-annual system and saw the pros and cons compared with the US quarterly cycle, especially in the context of data-driven investing.
Semi-annual reporting would reduce corporate transparency, but investors could offset that loss through trusted alternative datasets, provided those datasets meet high standards of data integrity. Data integrity has always mattered in alternative data, but fewer official data points make backtesting harder, so other methods are needed to build comfort with and trust in the data. In the absence of quarterly results, reliable alternative data could evolve into new signposts for investors. Consider PMI5 and weekly mortgage applications as early examples of predictive datasets that became target variables investors now seek to estimate earlier and more accurately than consensus6.
Investors play the players as much as they play the accuracy of estimates. Once an early indicator proves accurate and becomes a signpost, investors will look for an edge ahead of that potential catalyst. Core fundamental datasets like transactions, web traffic, app usage, and foot traffic have similar “play the players” dynamics building. The outcome will be the need for even more frequent alternative data, progressing from monthly to weekly, daily, and even intraday signals for these tried-and-true dataset types.
Taken together, these forces point to the same outcome: less frequent reporting widens short-term inefficiencies while elevating the value and responsibility of trusted data to preserve transparency.
For trusted large-cap companies, investor relations teams and executives understand that maintaining quarterly reporting supports valuation and investor confidence. A lack of trust erodes the valuation premium (academic research shows investors assign higher valuations to companies that provide consistent, credible information). If these investor relations teams continue to provide quarterly transparency, the status quo would persist regardless of government policy. And in that scenario, market forces would favor the transparent over those who do the bare minimum. Even if many firms maintain voluntary transparency, investors will rely increasingly on high-integrity alternative datasets to validate those disclosures and fill gaps where others retreat from frequent reporting.
Welcome to the Data Score newsletter, composed by DataChorus LLC. This newsletter is your source for insights into data-driven decision-making. Whether you’re an insight seeker, a unique data company, a software-as-a-service provider, or an investor, this newsletter is for you. I’m Jason DeRise, a seasoned expert in the field of data-driven insights. I was at the forefront of pioneering new ways to generate actionable insights from alternative data. Before that, I successfully built a sell-side equity research franchise based on proprietary data and non-consensus insights. I remain active in the intersection of data, technology, and financial insights. Through my extensive experience as a purchaser and creator of data, I have a unique perspective, which I am sharing through the newsletter.
1. Academic studies on the impact of less frequent reporting: Less efficient markets are a net negative on the governance scale, but dislocations create more opportunities for alpha
Academic studies show quarterly reporting is good for market efficiency at the individual company level. Quarterly reporting generally improves market efficiency by reducing information asymmetry7, lowering the cost of capital, and improving price informativeness. While some small firms may benefit from avoiding compliance costs by reporting semi-annually, the bulk of peer-reviewed evidence shows that investors, analysts, and markets value more frequent disclosure.
Fu, Kraft & Zhang (2012), “Financial Reporting Frequency, Information Asymmetry, and the Cost of Equity”
Methodology: Hand-collected reporting frequency data (1951–1973), OLS, fixed effects, and 2SLS to handle endogeneity.
Controls: Firm size, industry, effects of voluntary versus mandatory changes, matching to control firms.
Key Findings: Increased reporting frequency reduces a price impact measure of information asymmetry by ~0.216% and lowers cost of equity (CAPM) by ~0.628%.
Nallareddy, Pozen & Rajgopal (2021), “Consequences of More Frequent Reporting: The U.K. Experience”
Methodology: Difference-in-differences around the 2007 mandatory quarterly reporting imposition and 2014 repeal; panel regressions.
Controls: Firm size, industry, macro factors, and time effects.
Key Findings: No significant impact on investment (CapEx, R&D) from quarterly reporting, but analyst coverage rose and forecast accuracy improved. When firms voluntarily reverted to semi-annual after 2014, they experienced declines in analyst following.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2817120 (SSRN)
Haga et al. (2022, Europe), “Peer firms’ reporting frequency and stock price synchronicity: European evidence”
Methodology: Panel analysis of 33,338 European firm-year observations (2004–2017), relating stock price synchronicity to the concentration of quarterly reporting among a firm’s peers.
Controls: Size, opacity, ownership
Key Findings: A significantly negative relationship between peer quarterly-reporting concentration and stock price synchronicity, implying that peer quarterly reporting improves price informativeness relative to semi-annual reporting, with spillover effects on semi-annual reporting companies in the same industry.
https://www.sciencedirect.com/science/article/pii/S106195182200060X
Kajüter et al. (2019, Singapore), “The Effect of Mandatory Quarterly Reporting on Firm Value”
Methodology: Regression discontinuity around size cutoff
Controls: Size, liquidity8, peers
Key Findings: Mandatory quarterly reporting reduced small-firm valuations by ~5%.
Ultimately, for discretionary investors to generate alpha9, one needs to believe that the markets can be inefficient in the near term but efficient in the long term. I believe liquid financial markets function fine without quarterly reporting, as seen in other well-established financial markets with semi-annual reporting.
However, there can be market dislocations in individual names, where a temporary mismatch between asset prices and underlying fundamentals builds as information asymmetry grows. This is where the alpha opportunities lie, even as the overall market continues to function efficiently.
While one would think regulators would want to drive market efficiency higher, savvy investors could benefit from the greater dislocations that emerge when reporting intervals lengthen.
2. An American in London: Observations from abroad show how the differences create opportunities for faster signals to fill the gaps
I covered the European beverages sector as a sell-side analyst while living in London from 2006 to 2011. Trained in US markets, I initially assumed all global firms reported quarterly results but quickly learned that this was not the norm. Meeting the minimum requirement of semi-annual reporting, some European companies published full financial statements twice a year while providing shorter “trading updates” in the first and third quarters (a brief company communication between major reporting periods, often sharing revenue or key metrics without full financial statements). Others voluntarily issued full quarterly results, typically to attract US institutional investors accustomed to that cadence.
European long-only and hedge fund10 clients focused more on structural themes, while US clients concentrated on near-term earnings trajectories. That’s not to say there weren’t hedge funds trading around the quarter in Europe, or US investors taking a longer-term view. Qualitatively, though, I found my clients asked different types of questions aligned with these generalizations.
Covering semi-annual reporters often created alpha opportunities for analysts who combined faster-moving signals from peers, early alternative datasets, and management access to anticipate consensus resets at interim results. “Reading across” from the results of various industry participants, leveraging early forms of alternative and industry data, and maintaining consistent access to management teams helped me generate enough data points to estimate industry fundamentals more accurately than my sell-side competitors. The best analysts looked for shorter-term signals and signposts rather than waiting for the semi-annual results that everyone received at the same time.
These read-across signals illustrate how, in the absence of frequent company reporting, trusted data from peers and industries can collectively preserve transparency in the market.
It’s important to state something that may not be immediately obvious to fast-money US institutional investors who haven’t considered what investing looks like in a world without quarterly reporting: Quants11 and hedge funds have thrived in Europe despite fewer formal earnings checkpoints, proving that less frequent reporting does not hinder fast-money strategies, so long as they have a data edge.
At a high level, compared with their US counterparts, analysts on the buy-side and the sell-side in Europe benefit from:
Less time churning through quarterly preview/review cycles and more time developing differentiated ideas, both near-term catalysts and longer-horizon themes.
Space for a longer-term perspective, because points of market validation come less frequently. Those signposts carry more weight when they arrive, so the near term can’t be ignored completely: fast institutional investors still compete in Europe, and semi-annual results serve as catalysts that validate or challenge non-consensus views.
Lighter interim periods, with planned trading updates focused on key performance indicators rather than full financial statements.
And some negatives in Europe compared to the US:
A higher risk of mid-period surprises when performance diverges significantly from consensus, forcing companies to release unscheduled updates; quarterly earnings leave less room for major surprises between results. That said, the slower cadence of official results creates opportunities for faster-money investors like hedge funds, if they can leverage data more effectively between results.
Ironically, fundamental long-only investors may find more dislocations in the US, where crowding around quarterly earnings creates volatility that long-term investors can exploit, so long as they can withstand near-term pain. In Europe, because the signposts are less frequent, there’s more consensus building around longer-term theses, which makes longer-term alpha a bit harder to achieve.
3. Implications for data companies: Even greater importance of data integrity
I have written about the importance of data integrity as a critical factor in the due diligence process in a few articles.
When data products are implemented in the investment decision process, users want to know that the methodology is sound, internally consistent, and compliant. They want to know that the signals from the data, when used appropriately, will help refine their investment decisions to generate alpha. When the data leads to an incorrect investment decision, they want to confirm that the issue was not caused by an error in processing the data or by the data misrepresenting a real-world action, behavior, or intention. The various personas will start from a base-level assumption about the data in question (“I trust” or “I do not trust” as a starting point in the due diligence). At the end of the due diligence process, the purchaser must believe the data is trustworthy and can add value to the investment process.
To achieve high data integrity, consider the 8-point due diligence checklist previously published:
Use a completed Due Diligence Questionnaire (DDQ)12 to understand the compliance and risk associated with a dataset.
Assess the Return on Investment (ROI) by considering how many decisions can be influenced and the potential limitations of the data.
Conduct common sense, first-principles tests to ensure the data behaves as expected and reflects known events and expected seasonality. It’s surprising how often these types of tests are failed.
Perform backtesting against benchmarks to measure the dataset’s correlation with a known KPI while avoiding common statistical mistakes that lead to incorrect conclusions.
Assess the transparency of the methodology used for harvesting, cleansing, and enriching the data, while respecting proprietary trade secrets.
Evaluate how the data vendor handles feedback and whether they have the capacity for custom work, understanding the potential implications on competitive advantage.
Understand the vendor’s competitive set by asking about their closest competitors and their target customer base.
Examine the Service Level Agreement (SLA) for post-delivery service, including response times for errors, communication of code-breaking changes, and availability of sales engineering support.
Of these eight, the backtest is the most common approach to testing datasets: correlate the historic data with a known KPI that’s valuable for understanding business fundamentals.
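As a minimal sketch of that backtest step, assuming pandas and hypothetical inputs (a daily alternative-data series and a quarterly reported KPI; “QE” is the quarter-end resample alias in recent pandas versions):

```python
import pandas as pd

def backtest_against_kpi(alt_daily: pd.Series, kpi_quarterly: pd.Series) -> float:
    """Correlate a daily alternative-data series with a quarterly KPI.

    alt_daily: daily signal indexed by date (e.g., aggregated card spend).
    kpi_quarterly: reported KPI indexed by quarter-end date (e.g., revenue).
    Returns the correlation of year-over-year growth rates.
    """
    # Aggregate the daily signal to the KPI's quarterly calendar.
    alt_quarterly = alt_daily.resample("QE").sum()
    # Compare year-over-year growth rather than levels to strip out
    # seasonality and scale differences between the panel and the KPI.
    alt_yoy = alt_quarterly.pct_change(4)
    kpi_yoy = kpi_quarterly.pct_change(4)
    # Align the two series on shared quarters and correlate.
    aligned = pd.concat({"alt": alt_yoy, "kpi": kpi_yoy}, axis=1).dropna()
    return aligned["alt"].corr(aligned["kpi"])
```

Correlating growth rates rather than levels is one way to avoid the common statistical mistakes mentioned above; a real diligence process would also test stability across subperiods and out of sample.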
What happens if backtesting is harder to do?
Critics may argue that fewer KPI observations make backtesting unreliable, but this only heightens the importance of rigorous data integrity and common-sense validation.
Fewer reported results per year mean fewer target KPIs to test predictive signals against. Companies may continue to provide Q1 and Q3 trading updates, as in Europe, which would give insight into key KPIs (typically revenues, with less insight on margins or balance sheet items). Nevertheless, those trading updates may not follow the same reporting standards as the historical quarterly results, which can affect the time series and backtesting analysis. This raises the risk that backtesting will become less reliable.
A minimum threshold of data is required to build valid in-sample, development, and out-of-sample datasets before a signal can be backtested for inclusion in a quant model. This relates to both frequency and coverage.
Consider a company that shifts from quarterly to semi-annual reporting. Over 10 years, it would produce 20 data points instead of 40, cutting in half the available in-sample, development, and test observations. If we set aside 60% of the target observations for the in-sample population, we’re building a model on 12 target observations, with just 4 in the dev sample and 4 in the test sample. Building a robust company-specific model becomes much harder with so few data points.
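A small helper makes the arithmetic concrete (the 60/20/20 split mirrors the example above; the function itself is illustrative, not a standard):

```python
def split_counts(years: int, reports_per_year: int, in_sample_frac: float = 0.6):
    """Count in-sample, development, and test observations for a backtest,
    splitting the non-in-sample remainder evenly between dev and test."""
    total = years * reports_per_year
    in_sample = round(total * in_sample_frac)
    dev = (total - in_sample) // 2
    test = total - in_sample - dev
    return total, in_sample, dev, test

print(split_counts(10, 4))  # quarterly: (40, 24, 8, 8)
print(split_counts(10, 2))  # semi-annual: (20, 12, 4, 4)
```

With only four observations each in the development and test samples, any measured relationship is statistically fragile.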
Of course, if the alternative data can cover hundreds or thousands of companies, then the sample size can be large enough to backtest.
Existing relationships observed under quarterly reporting could be assumed to persist, provided the underlying economic and business structures remain stable. Major shifts in the structure of an economy, industry, or company could break the old relationships, which would hurt the ability to trust the backtest alone.
Common sense testing becomes more important:
With less ability to statistically backtest data, the common sense checks become more important in the mix of gaining trust in the data as a signal. I explored this topic in a report earlier this year.
To trust backtests, the underlying raw data must first make logical sense. Investors know correlation doesn’t mean causation. A key defense is being able to logically connect alternative data to real-world activities and then connect those dots to the KPIs that matter to security valuation.
Common sense data checks help connect the data vendor’s methodology and output logically. Well-documented consumer and business behaviors serve as ground truth, ensuring data is logically connected to reality. A dataset that backtests well but whose underlying data is hard to connect back to reality triggers red flags for investors who have been burned by the adage that correlation is not causation.
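As one illustration of such a check, the sketch below (hypothetical series and an illustrative threshold, assuming pandas) verifies that a US retail foot-traffic dataset shows the December holiday uplift any analyst would expect from first principles:

```python
import pandas as pd

def passes_holiday_check(foot_traffic: pd.Series, min_uplift: float = 0.10) -> bool:
    """First-principles sanity check: a US retail foot-traffic series
    should show a December uplift versus the average of other months.
    The 10% threshold is illustrative, not a standard."""
    monthly = foot_traffic.resample("ME").sum()  # "ME" = month-end alias
    december = monthly[monthly.index.month == 12].mean()
    other_months = monthly[monthly.index.month != 12].mean()
    return december >= other_months * (1 + min_uplift)
```

A dataset that fails a check like this, however well it correlates historically, warrants a closer look at the collection methodology.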
Longer-term use cases may become more important, which will require different tests to assess data integrity
When thinking about the long term, the focus shifts from finding the indicator that most closely matches a KPI in backtests to understanding what drives that KPI. For example, credit card transactions for a retailer are a great proxy for the upcoming quarterly revenue results, but they don’t help as much with what revenue will be a year from now.
Understanding the underlying drivers of a KPI, like revenue, will provide better signals for the long term. I discussed a different approach to estimating revenues using multiple datasets and creating underlying indicators based on the customer acquisition funnel.
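As a hedged sketch of that funnel idea (hypothetical drivers and numbers; not the exact approach from the earlier piece), revenue can be decomposed into funnel stages so that each stage can be tracked with a different dataset:

```python
def funnel_revenue_estimate(visitors: float, conversion_rate: float,
                            orders_per_customer: float, avg_order_value: float) -> float:
    """Decompose revenue into customer-acquisition-funnel drivers.

    Each driver can be proxied by a different alternative dataset:
    visitors by web traffic, conversion and order value by transaction
    panels, engagement by app usage, and so on.
    """
    customers = visitors * conversion_rate
    return customers * orders_per_customer * avg_order_value

# Illustrative inputs only: 10M visitors, 2% conversion, 1.5 orders, $80 AOV
print(funnel_revenue_estimate(10_000_000, 0.02, 1.5, 80.0))  # 24,000,000.0
```

Tracking the drivers separately shows why revenue is changing, which is what matters for a view a year out.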
Alternative datasets with the highest integrity become trusted signposts
Over many decades, certain data time series have become widely used to understand the economy, industries, and companies well before official statistics are made available. Consider PMI and weekly mortgage application reports: early forms of predictive data now commonly used as leading indicators of GDP and housing demand, respectively. These figures have become so important and trusted that they are now target variables that institutional investors use other datasets to predict.
These signposts may create new targets for alternative data to be backtested against, filling the gap left by quarterly reported KPIs. Credit card data, web traffic, foot traffic, and app usage data are widely used because of prior backtesting and a track record of generating alpha. Providers could go on to establish highly trusted signposts that all investors need in order to understand market moves in a semi-annual reporting scenario.
4. Investors play the players in addition to playing the forecasting game, which means proven data can become the benchmark, and there is a need for even faster signals
Credit card datasets have been used for more than a decade to get an edge on quarterly consumer company results. But the players not only play the prediction game, they play each other.
Here’s a thought exercise you can try with a large group: everyone submits an estimate between 0 and 100, and the correct answer is 2/3 of the group average. I’ve run this multiple times in training sessions to help explain the “playing the players” part of the market. In this case, I’ve given you the exact algorithm that will generate the right answer; what you don’t know is what the others in the room will do. Surely, someone will not pay attention and answer a number higher than 67 (if you honestly thought everyone else would estimate the maximum of 100, the highest sensible answer of 2/3 of the average would be 67). Others will answer 1 or 0 because they recognize the recursive nature of the algorithm, and they will be frustrated when the winning answer lands somewhere above 10 and below 50 (the specific number is different every time I’ve run it, but it’s typically in that wide range). The key is figuring out how many times the others in the room will take 2/3 of the number they came up with: you want to run the 2/3 algorithm one more time than the group average does. This is the essence of “playing the players.”
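A quick simulation illustrates the dynamic (a toy model: each player anchors at 50 and applies the 2/3 step a random number of times; the depth distribution is an assumption, not data from my sessions):

```python
import random

def simulate_guessing_game(n_players: int = 100, seed: int = 42):
    """Simulate the 2/3-of-the-average game with players of varying
    reasoning depth (how many times they apply the 2/3 step)."""
    rng = random.Random(seed)
    guesses = [50.0 * (2 / 3) ** rng.choice([0, 1, 2, 3, 4])
               for _ in range(n_players)]
    target = (2 / 3) * (sum(guesses) / len(guesses))
    winner = min(guesses, key=lambda g: abs(g - target))
    return target, winner

target, winner = simulate_guessing_game()
print(f"2/3 of the average: {target:.1f}; winning guess: {winner:.1f}")
```

The winner is whoever iterated roughly one step deeper than the crowd’s average, which is exactly the “playing the players” dynamic.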
The same dynamic applies in markets. Once an indicator becomes a trusted signpost, investors compete to act ahead of that expected catalyst.
It leads to demand for increased frequency of alternative data that has proven trustworthy at predicting results. What was once a quarterly data point became a monthly, weekly, or daily data point, and in some cases an intraday one.
This works because slower-moving signals enter the market through trades that gradually adjust fund positioning.
Possessing an early but incomplete indicator allows institutional investors to act ahead of others—buying before the crowd and selling into the news of the monthly release, often well before quarterly results.
Core fundamental datasets like transactions, web traffic, app usage, and foot traffic have similar “play the players” dynamics building.
The ongoing trend toward more frequent data continues to reveal the “playing the players” game within the markets.
Conclusions
If the US moves to semi-annual reporting, investors will need to rely more heavily on alternative data to bridge the wider gaps between company disclosures. Longer periods between official results would extend short-term inefficiencies in the market, creating both risks and opportunities for those equipped to analyze data effectively.
Whether transparency improves or deteriorates will depend on the integrity of the alternative data ecosystem. If datasets remain accurate, consistent, and well documented, they could become the backbone of market transparency, serving as new signposts that fill the space once occupied by quarterly reports.
Investors with advanced data capabilities will continue the trend toward faster signals, moving from monthly to weekly, daily, and even intraday insights. The competitive edge will come from combining high-frequency indicators with a deeper understanding of underlying business drivers, rather than just chasing near-term KPI correlations.
For financial professionals, data teams, and data providers, the path forward is clear: diversify data sources, elevate validation standards beyond traditional backtesting, and prepare to operate in an environment where market inefficiency becomes both a challenge and an opportunity.
- Jason DeRise, CFA
1. Alternative data: Alternative data refers to data that is not traditional or conventional in the context of the finance and investing industries. Traditional data often includes factors like share prices, a company’s earnings, valuation ratios, and other widely available financial data. Alternative data can include anything from transaction data, social media data, web traffic data, web mined data, satellite images, and more. This data is typically unstructured and requires more advanced data engineering and science skills to generate insights.
2. Key Performance Indicators (KPIs): These are quantifiable measures used to evaluate the success of an organization, employee, etc. in meeting objectives for performance.
3. Backtesting: The process of testing a trading strategy or model using historical data to evaluate its performance before applying it in production. The goal is to determine how well the model would have performed in the past and, by extension, how it might perform in the future.
4. Buy-side/Sell-side: Buy-side typically refers to institutional investors (hedge funds, mutual funds, etc.) who invest large amounts of capital, and sell-side typically refers to investment banking and research firms that provide execution and advisory services (research reports, investment recommendations, and financial analyses) to institutional investors.
5. PMI (Purchasing Managers’ Index): A survey-based indicator of economic activity in manufacturing and services sectors.
6. Consensus: “The consensus” is the average view of the sell-side for a specific financial measure. Typically, it refers to revenue or earnings per share (EPS), but it can be any financial measure. It is used as a benchmark for what is currently factored into the share price and for assessing if new results or news are better or worse than expected. However, it is important to know that sometimes there’s an unstated buy-side consensus that would be the better benchmark for expectations.
7. Information asymmetry: A situation in which one party, typically company insiders, possesses more information than others, such as investors.
8. Liquidity: The ease with which an asset can be bought or sold without materially affecting its price.
9. Alpha: A term used in finance to describe an investment strategy’s ability to beat the market or generate excess returns. A simple way to think about alpha is that it’s a measure of the outperformance of a portfolio compared to a pre-defined benchmark for performance. Investopedia has more detail: https://www.investopedia.com/terms/a/alpha.asp
10. Long/Short Equity Hedge Fund: Long/Short Equity funds buy positions (long) in stocks they believe will go up in value and sell short stocks (short) that they believe will go down in value. Typically, there is a risk management overlay that pairs the long and short positions to be “market neutral,” meaning it doesn’t matter if the market goes up or down; what matters is that the long position outperforms the short position. Short selling, by a simplistic definition, is when an investor borrows stock from an investor who owns it and then sells the stock. The short seller will eventually need to buy back the stock at a later date to return it to the owner of the stock (and will profit if they buy back the stock at a lower price than they sell it).
11. Quant funds: Short for “quantitative funds,” also referred to as systematic funds. Systematic refers to a quantitative (quant) approach to portfolio allocation based on advanced statistical models and machine learning (with varying degrees of human involvement “in the loop” or “on the loop” managing the programmatic decision-making).
12. DDQ (Due Diligence Questionnaire): A standardized document used by data buyers (especially institutional investors) to assess a vendor’s compliance, privacy practices, methodology, and business model.