When to Scale or Keep Iterating
Unlocking Product/Market Fit: The Art of Balancing Scale and Iteration in the Alternative Data Sector
Welcome to the Data Score newsletter, your go-to source for insights into the world of data-driven decision-making. Whether you're an insight seeker, a unique data company, a software-as-a-service provider, or an investor, this newsletter is for you. I'm Jason DeRise, a seasoned expert in the field of alternative data insights. As one of the first 10 members of UBS Evidence Lab, I was at the forefront of pioneering new ways to generate actionable insights from data. Before that, I successfully built a sell-side equity research franchise based on proprietary data and non-consensus insights. Through my extensive experience as a purchaser and creator of data, I have gained a unique perspective that allows me to collaborate with end-users to generate meaningful insights.
Product/market fit1 is elusive in the alternative data sector. Prior entries in The Data Score newsletter explored the idea of working backward from outcomes needed by financial users of alternative data and designing the product to meet those needs.
Spending resources to scale and automate a data product that has not shown product/market fit is a way to build unwanted tech debt, which can hold back the ability to evolve the product along the roadmap to meet more client outcomes.
In the beginning, the approach is handcrafted
As part of the Evidence Lab origin story, the first projects my colleagues and I took on in 2014 and 2015 were not built as scalable solutions. We were looking to prove the concept of a centralized alternative data team as a critical driver of the sell-side research department’s revenue share. The handcrafted work on these early projects allowed us to find our product/market fit with our primary client, the UBS sell-side research analyst2. At the time, I felt like “no one else is crazy enough to do all this manual work to get insights from data. They must be doing it with lots of technology.” But I would later learn that this handcrafted work is a rite of passage in the scaling process.
In May 2017, one of my all-time favorite podcasts launched: Masters of Scale (https://mastersofscale.com/), which tells the story of how products and services rapidly scale. But the first episode started in a different place.
“Do things that don’t scale” episode from Masters of Scale: https://mastersofscale.com/brian-chesky/ This is the very first episode of the podcast series, where Reid Hoffman interviews Airbnb’s Brian Chesky. From the podcast episode summary: “If you want your company to truly scale, you first have to do things that don’t scale. Handcraft the core experience. Serve your customers one by one, until you know exactly what they want. That’s what Brian Chesky did in the early days as co-founder and CEO of Airbnb. He shares their route to crafting what he calls an ‘11-star experience.’”
There are so many great concepts and quotable moments in the podcast. Here are just a few:
HOFFMAN: Build by hand until you can’t…. here’s the next thing to notice: they didn’t launch perfectly scaled services. They built everything by hand.
CHESKY: We had a saying that you would do everything by hand until it was painful. So Joe and I would photograph homes until it was painful, then we get other photographers. Then we’d manage them with spreadsheets until it was painful. Then we got an intern…. And then we’d automate the tools to make her more efficient…. Eventually a system does everything. We built a system where now the host comes, they press a button, it alerts our system which goes to a dispatch of photographers, so it’s all managed through technology. They get the job, they market through an app that we built, and then payment happens. The whole thing is automated now.
HOFFMAN: Note how they gradually worked out a solution. They didn’t guess at what users wanted. They reacted to what users asked for. Then they met the demand through a piecemeal process. And here we come to the true art of doing things that don’t scale. It’s not just a crude way of succeeding on a shoestring budget. It also gives your team the inspiration and urgency to build the features that users really want…
HOFFMAN: …Now it’s common for entrepreneurs to swap stories like this. And I think it’s worth dwelling on these early days of handcrafted work, because most entrepreneurs tend to have a funny reaction to these experiences. They may laugh about it later. They may call the work unglamorous. They may celebrate the day they could hire a helping hand or automate these chores out of existence. But thoughtful founders will never say, “What a complete waste of time.” They’ll often look back on this period as one of the most creative phases of their careers.
How will you know if product/market fit is achieved?
Some of the best thinking on product/market fit can be found in one of the most popular newsletters on Substack, “Lenny’s Newsletter,” where he consolidated his research on the topic into one post:
Retention: Users stick around
Surveys: Users say they’d be very disappointed if your product went away
Exponential organic growth
Cost-efficient growth
CAC < LTV3
Customers clamor for your product
People are using it even when it’s broken
I will leave it for future newsletters to explain in detail how to apply each of these approaches in the context of alternative data products.
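As a minimal illustration of the CAC &lt; LTV signal in the list above, the back-of-the-envelope check is just a ratio. All of the numbers below are hypothetical, and the LTV formula is the common simplification (revenue per client times margin, divided by churn), not a prescription:

```python
# Hypothetical numbers, purely illustrative.
cac = 40_000.0        # cost to acquire one client (sales + marketing)
arpu = 60_000.0       # annual revenue per client for the data product
gross_margin = 0.70   # share of revenue left after delivery costs
annual_churn = 0.25   # fraction of clients lost per year

# A common simplification: LTV = ARPU * margin / churn
ltv = arpu * gross_margin / annual_churn

print(f"LTV = {ltv:,.0f}, CAC = {cac:,.0f}, ratio = {ltv / cac:.1f}x")
```

A ratio comfortably above 1 (often 3x or better) is the usual bar for calling growth cost-efficient.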
When to iterate?
As handcrafted products are made available to customers, the feedback loop is critical. Adoption of data products effectively requires your customers to change how they work to generate insights on an ongoing basis. The data product has to solve a problem for the client in such a meaningful way that they are willing to change how they work, or the data product has to fit seamlessly into the current workflow.

Feedback that the product is merely “interesting” is not good enough. Even “nice to have” is not good enough. Only feedback that the feature or product is “critical” and “can’t live without” shows product/market fit. Spend time with the users and find out what’s “just interesting” but not useful.

Then go back to the product and iterate quickly. It doesn’t even need to be a live working version; a wireframe4 can be enough to show a new feature and get critical feedback: “Would this be useful if we did this?” “How would you use it?” “What would make it even better?” Going back to the Masters of Scale Brian Chesky episode: “What would make this an 11-star product?” Make those changes and get feedback again. Don’t overengineer this process of building and getting feedback.
If this process is not revealing a path to “critical” and “can’t live without” for a specific feature or product, it is time to pivot to other features and products with better product/market fit.
Is iterating to find product/market fit like being stuck in a roundabout? “Kids! Big Ben, Parliament, again.”
An example of finding product/market fit
One of the multiple leadership roles I had during the buildout of Evidence Lab was the head of the web-mined pricing and demand product area. We used advanced web mining techniques to monitor product pricing, inventory, and demand. The insight behind the product was that we could reverse engineer corporate strategy and execution by systematically harvesting available products and services from the web on a frequent basis and turning the data into fundamental metrics aligned with how each business and industry makes decisions. Ultimately, the price of goods and services is how a business generates its cash flow. Changes in the price and customers’ willingness to pay reveal deep fundamental insights as well as surprising near-term inflection points.
But how would that ability manifest into a product? In the early days, I worked directly with analysts in every sector globally where the investment debate could be addressed through the pricing and demand product areas. This was a very bespoke process where each analyst would have a nuanced approach to addressing the investment debate in the market, even though multiple analysts were working on nearly identical questions for their sector and geography. The work for each of them was highly customized.
Product/Market Fit Sign #1
As each custom analysis was created and delivered, I measured the impact. I would track how the specific features we built were being used to make investment recommendations. Some of the features were used once, while others were used repeatedly.
Lenny’s Newsletter sourced a great chart from Brian Balfour: https://brianbalfour.com/essays/product-market-fit
This is different from simply monitoring total usage: what matters is converting trial into repeat usage. The features repeatedly used from each delivered dataset had product/market fit. By monitoring repeat usage of each feature, we had the data to decide which features to scale up and which to cut.
The above chart would be applied to each product and relevant customer cohort5. In the made-up example below, the first feature and client cohort combination resulted in poor retention rates. Understand why the clients are not returning. What would make it more useful? Make the change and test with a new cohort, and continue to iterate until retention is materially high (e.g. 40–50% per cohort).
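The cohort-retention logic described above can be sketched in a few lines. This is a toy example with made-up usage events, not a real telemetry pipeline; the point is that retention for a cohort is just the share of its members still active at each interval since first use, and a curve that flattens out (rather than decaying to zero) is the signal worth scaling:

```python
# Toy usage log: (client_id, weeks_since_first_use) events.
# In practice this would come from product telemetry.
usage = [
    ("a", 0), ("a", 1), ("a", 4),
    ("b", 0), ("b", 1),
    ("c", 0),
    ("d", 0), ("d", 4),
]

# Everyone who ever used the feature forms the cohort.
cohort_size = len({client for client, _ in usage})

def retention(week: int) -> float:
    """Share of the cohort active in a given week since first use."""
    active = {client for client, w in usage if w == week}
    return len(active) / cohort_size

for week in (0, 1, 4):
    print(f"week {week}: {retention(week):.0%}")
```

Here retention drops from 100% to 50% and then plateaus, which is the flattening shape Balfour’s chart associates with product/market fit; a cohort whose curve keeps sliding toward zero is the one to iterate on or cut.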
Once it’s likely that product/market fit is achieved, I would also work to understand why the feature and product were repeatedly used. I don’t want to assume I know. “Are we still on the same page about why it is valuable and what is being done with the data product?”
We found that the metrics in the product were catching inflection points before results were released, helping the analysts get on the right side of the investment recommendation. That’s right in line with the expected job to be done by the data product.
Product/Market Fit Sign #6
When we found that the handcrafted work had a good product-market fit, we leveraged social proofing6 to grow awareness of the benefits of the product. FOMO7 is a powerful driver of new product trials.
“Founding a startup is deciding to take on the burden of Sisyphus: pushing a boulder up a hill.
Pushing a boulder: don’t have product/market fit. Chasing a boulder: have product/market fit. Both are very demanding, but feel totally different. If you’re still pushing the boulder, you don’t have it yet.”
https://twitter.com/eshear/status/1155180521485242368
Each analyst received a slightly different analysis, but soon we were hearing feedback with high urgency: “Why am I not getting what analyst X gets? When can I get that too?”
We went from pushing a boulder up the hill to chasing a boulder, in keeping with Emmett Shear’s analogy.
Product/Market Fit Sign #7
People are using it even when it’s broken.
It’s safe to say that I didn’t get everything right the first time on the pricing data products. The dataset size of the product was huge, mainly because of the number of first-, second-, and third-derivative metrics made available in the product, which were intended to let the user go deep. However, in a focus group brilliantly led by my colleague, we uncovered that the depth of metrics was actually a blocker to insight discovery. Our retention metrics showed that clients valued the dataset; however, we inadvertently blocked additional users from going from trial to regular use because the product was too complex and over-the-top in scope.
While there were many clear points of feedback from the focus group, I think what resonated with me the most was “If the pricing data product was a movie star, who would it be?” The answer was Nicolas Cage!
Even though we were in production, at full scale, we went back to the iterating step and simplified the product based on our better understanding of “critical” vs. “nice to have” metrics. Together with our operations team, we worked through which metrics could be removed to increase ease of use while also reducing the resources needed to maintain the product. The end result was an easier-to-use product that reached higher levels of adoption and consumed fewer resources too.
You found product/market fit; now it’s time to scale
I will leave my thoughts on how to scale a data product for another Data Score Newsletter entry some time in the future. However, before scaling, work needs to be done to classify the process and features along two dimensions, which are not mutually exclusive: (1) value-add and (2) automatable.
Value-added processes and features: You need to properly understand what is driving the value of the product. You need to be sure what problems are being solved and how the data product fits into the client’s approach. Without that north star, the process of automating can risk losing the “so what?” In scaling the product, we need to watch for the inevitable, well-intentioned suggestion, “Wouldn’t this be more efficient if we cut step X?” when step X is actually the most important feature of the product. That does create a great learning opportunity for the team to understand why the feature is critical and not possible to remove.
Automatable: What are the steps in the process that are repeatable and follow a clear logic? What can be built as configurable parameters8? Where are the highest manual pain points? What is the level of accuracy needed in the process? Where do humans need to be in the loop or on the loop9 in the process? When should exceptions be triggered for humans to review?
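The questions above about configurable parameters8 and humans on the loop9 can be sketched as a small routing rule. Everything here is hypothetical, the field names are illustrative and not from any real Evidence Lab system; the idea is simply that thresholds live in configuration, and a human reviews a run only when it looks anomalous:

```python
from dataclasses import dataclass

# Hypothetical configuration for one pricing-collection job.
@dataclass
class CollectionConfig:
    frequency_days: int = 7        # how often to harvest prices
    max_price_change: float = 0.5  # beyond this, flag for human review
    min_coverage: float = 0.9      # required share of products captured

def needs_human_review(cfg: CollectionConfig,
                       price_change: float,
                       coverage: float) -> bool:
    """Route to a human 'on the loop' only when the run looks anomalous."""
    return abs(price_change) > cfg.max_price_change or coverage < cfg.min_coverage

cfg = CollectionConfig()
print(needs_human_review(cfg, price_change=0.8, coverage=0.95))   # suspicious jump
print(needs_human_review(cfg, price_change=0.02, coverage=0.97))  # fully automated
```

The design choice is that the automated path is the default and exceptions are explicit: tightening or loosening `max_price_change` and `min_coverage` moves work between the machine and the human reviewer without changing code.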
- Jason DeRise, CFA
Product/market fit: This term refers to the point at which a product or service has been optimized to meet the needs and preferences of its target market, resulting in strong customer satisfaction and retention. Achieving product/market fit is considered essential for the success of a startup or new product.
Sell-side research department: Sell-side research departments are part of investment banks or brokerage firms that produce research reports, investment recommendations, and financial analyses for their clients, typically institutional investors.
CAC: Customer Acquisition Cost (CAC) is the total cost of acquiring a new customer, including marketing and sales expenses. It is an important metric for evaluating the efficiency of a company's customer acquisition efforts.
LTV: Lifetime Value (LTV) is a metric that represents the total net profit a company can expect to make from a customer throughout their entire relationship with the company. It helps businesses understand the long-term value of their customers and make informed decisions about customer acquisition and retention strategies.
Wireframe: A wireframe is a basic visual representation of a web page, app, or product layout, typically used during the design process to plan and communicate the structure and functionality of the product. Wireframes can be simple sketches or more detailed digital mockups.
Cohort: In the context of this article, a cohort refers to a group of users or customers that share a common characteristic, such as the time they started using a product or the type of product they use. Analyzing cohorts can help businesses understand user behavior, product adoption, and retention patterns.
Social Proofing: Social proof refers to the psychological phenomenon where people tend to follow the behavior of others, especially when they are unsure about how to act in a certain situation. In the context of this article, social proofing means demonstrating the value and success of the product by showcasing the positive experiences and endorsements from satisfied users, which in turn can help convince potential clients to adopt the product.
FOMO = Fear of Missing Out
Configurable parameters: In the context of this article, “configurable parameters” are aspects of a data product that can be easily adjusted or customized by users to meet their specific needs or preferences.
On the loop: This term refers to a situation where humans are monitoring an automated process and intervene only when necessary, as opposed to being directly involved in the process (i.e., "in the loop").