AI Killed the Streaming Star and Investor Inboxes
Video killed the radio star. Streaming killed the video star. Will AI kill streaming next? An analogy for what is coming to data‑driven insights.
In June, a band no one had heard of released a debut album. Two weeks later they dropped another. By mid‑July they had three albums on Spotify, thirty‑nine songs in total, and more than one million monthly listeners. Their sound sits between Crosby, Stills, Nash & Young and Creedence Clearwater Revival and fits neatly into Spotify’s algorithmic playlists.
But here’s the strange part: the band doesn’t exist. The Velvet Sundown is one hundred percent AI‑generated, including songs, voices, and album art. Yet they reached more than one million monthly listeners on Spotify, some unaware of the AI origin and others unbothered by it.
What does this have to do with data-driven insights?
What is happening to music is a mirror for what’s about to happen to data-driven investing. Just as AI can flood Spotify with infinite songs, it can flood buy-side inboxes with research, analysis, and “insight.” Data companies and sell-side research analysts beware.
The scarce resource is no longer content. It is trust.
Listeners split: some let playlists set the vibe, while others search for musical connection. AI music can crowd out the space for human‑made music.
Insight consumers will split the same way. AI will churn out consensus syntheses that are useful for context. Today’s AI still cannot match star human analysts at generating non‑consensus, accurate insights that create an edge. AI‑generated content could crowd out the space for real insights.
Welcome to the Data Score newsletter, composed by DataChorus LLC. This newsletter is your source for insights into data-driven decision-making. Whether you're an insight seeker, a unique data company, a software-as-a-service provider, or an investor, this newsletter is for you. I'm Jason DeRise, a seasoned expert in the field of data-driven insights. I was at the forefront of pioneering new ways to generate actionable insights from alternative data. Before that, I successfully built a sell-side equity research franchise based on proprietary data and non-consensus insights. I remain active in the intersection of data, technology, and financial insights. Through my extensive experience as a purchaser and creator of data, I have a unique perspective, which I am sharing through the newsletter.
The Velvet Sundown
The Velvet Sundown came out of nowhere and generated millions of streams on Spotify, blending into Spotify‑made playlists for listeners who like classic rock, especially Crosby, Stills, Nash & Young, CCR, and 60s rock.
The band released thirteen songs on a debut album on June 5. They released another album on June 20 with thirteen more. By the end of June they had nearly half a million listeners.
Skepticism surfaced about whether the project was real. A third album arrived on July 14 with thirteen more songs. In July the band had more than one million monthly listeners on Spotify.
In early July it was revealed that The Velvet Sundown is entirely AI‑generated. Here is the current Spotify description, which appears to be AI‑written:
🤖 The Velvet Sundown is a synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence.
This isn’t a trick - it’s a mirror. An ongoing artistic provocation designed to challenge the boundaries of authorship, identity, and the future of music itself in the age of AI.
All characters, stories, music, voices and lyrics are original creations generated with the assistance of artificial intelligence tools employed as creative instruments. Any resemblance to actual places, events or persons -living or deceased- is purely coincidental and unintentional.
Not quite human. Not quite machine. The Velvet Sundown lives somewhere in between.
If I’m casually listening, it definitely sounds like human-made music at first.
As an experiment, Rick Beato used Suno and Claude to create a fictional indie artist named Eli Mercer and a few songs with only a handful of prompts. He showed how easy it is.
It pains me to know it’s a very low-effort AI song, but that song could chart with marketing dollars behind it.
The human reaction to AI-generated music
For listeners who want background music, it may not matter. Even after the AI reveal, streams continued. The Velvet Sundown still has roughly five hundred thousand monthly listeners. Streaming platforms need content and need to fill custom playlists, so there is little economic deterrent.
If consumers do not care, an infinite number of AI songs can capture listening share and drive revenue to AI music generators. With playlists dominating streams, even mediocre AI music can take revenue share.
For listeners with discerning taste and judgment, a cohort may revolt against AI music and seek human‑created music at a premium. Music is about culture, belonging, and shared experience. People who love music want a connection to the art. Learning that a song is not human‑made can diminish that connection. It is unsettling that The Velvet Sundown or Eli Mercer can sound convincingly human‑made.
This is not just about music. Fully generated AI content now produces marketing materials, opinion pieces, and long‑form posts on platforms such as Substack. It is also being tested in financial markets.
Like The Velvet Sundown-filled playlists, AI-generated content is going to fill inboxes
AI will flood inboxes with research the way The Velvet Sundown flooded playlists. A half‑baked idea can become pages of content and an email blast with almost no effort.
Institutional investor inboxes were already crowded. The main barrier was manual effort. With that barrier gone, content flows faster than ever.
And of course, the defense against the low-effort “20-page AI pitch” is to put it back into an LLM and ask for the 200-character summary. The message is condensed, consumed, and judged, possibly by another AI.
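As a concrete sketch of that defense, here is a minimal Python example, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are my own illustrative assumptions, not a recommendation.

```python
# A minimal sketch of the "summarize it back" defense, assuming the OpenAI
# Python SDK and OPENAI_API_KEY set in the environment. The model name and
# prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def compress_pitch(pitch_text: str, max_chars: int = 200) -> str:
    """Reduce a long, possibly AI-generated pitch to its core claim."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model works
        messages=[
            {"role": "system",
             "content": "You triage long investment pitches for a portfolio manager."},
            {"role": "user",
             "content": f"State the core claim of this pitch in under "
                        f"{max_chars} characters:\n\n{pitch_text}"},
        ],
    )
    return response.choices[0].message.content.strip()
```

The irony writes itself: AI inflates a thesis into twenty pages, and AI deflates it back to a sentence before a human ever reads it.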
Unlike art, data‑driven decisions are judged by outcomes. Music is subjective. In investing, if an insight is correct, does it matter whether AI wrote it?
I believe scarcity is shifting from content to trust, judgment, and originality, especially for those seeking an edge over consensus.
Isn’t yesterday’s AI just today’s automation?
Digitally enabled music has been around for decades. Recording is rarely done to tape. Instruments are tracked directly into digital platforms such as GarageBand, Logic, or Ableton Live, where imperfections are easy to fix. Autotune has corrected notes for decades. Music can be quantized, or snapped to a predefined rhythm, with a click. Yesterday’s AI “sorcery” becomes today’s automation. Autotune once felt like cheating; now it is assumed.
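For readers who have not used a digital audio workstation, quantization is easy to picture in code. Here is a minimal sketch with illustrative note timings; real DAWs offer more nuance, such as partial-strength quantization that preserves some human feel.

```python
# A minimal sketch of rhythmic quantization: snapping recorded note onsets
# (measured in beats) to the nearest position on a 16th-note grid.
# Timings are illustrative.
def quantize(onsets_in_beats, grid=0.25):
    """Snap each onset to the nearest multiple of `grid` (0.25 beats = a 16th note)."""
    return [round(onset / grid) * grid for onset in onsets_in_beats]

# A slightly rushed and dragged human performance...
played = [0.02, 0.51, 0.98, 1.27, 1.49]
print(quantize(played))  # [0.0, 0.5, 1.0, 1.25, 1.5] -- perfectly on the grid
```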
Music has been sampled, recombined, and reimagined for decades. Clever use of older songs, digital beats, new instruments, and vocals has produced massive hits. Daft Punk built a sound with technology and adopted robotic personas in interviews and live performances.
Audiences already accept technology in music creation. The latest AI tools are a logical extension of that shift. A few sentences of prompting can now generate a song that passes for human-made.
Therefore, there could be a market for AI‑generated music, and it could crowd out the economics of human‑created music.
Yesterday’s AI is today’s automation in data-driven insight content creation too
Technology in content creation is not new. Automated spelling and grammar checks are assumed. Templates and design suggestions speed production. Before GPT, early natural language generation tools produced paragraphs, and automation tools built charts.
Reasoning LLMs as a reliable replication of human thinking?
Iterating with LLMs unlocks more potential to approximate human thinking. But can the machines think for themselves yet? No. After asking an LLM to create something, users often request an explanation. That does not mean the explanation reflects how the model works. It is still pattern matching. Even models that show “thinking” often recycle outputs into a refinement loop that does not represent the internal process.
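To make that refinement loop concrete, here is a minimal sketch of the external draft-critique-revise pattern. The `call_llm` function is a hypothetical stand-in for any chat-completion API, and nothing here claims to describe any model's internal reasoning.

```python
# A sketch of an external draft-critique-revise loop. `call_llm` is a
# hypothetical stand-in for a real LLM API call; this illustrates an
# orchestration pattern, not any model's internal process.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def refine(question: str, rounds: int = 2) -> str:
    draft = call_llm(question)
    for _ in range(rounds):
        critique = call_llm(f"Critique this draft:\n{draft}")
        draft = call_llm(
            f"Revise the draft to address the critique.\n"
            f"Draft:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft  # more polished, but still pattern matching end to end
```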
Ultimately, humans must decide whether the output makes sense. The human in the loop remains essential for trust.
Like exceptional musicians who create memorable music that connects with audiences in a way that machines currently cannot, the top-ranked analysts are able to generate novel insights that advance thinking beyond the consensus in a way that machines currently cannot.
Human reaction to AI-generated content and what it means for the world of data and investing: it’s about trust!
I would argue trust is the factor that matters when consuming insights, whether human‑ or machine‑generated.
As I showed when OpenAI’s Deep Research launched, it could replicate the average sell-side research analyst’s content. As impressive as that is, the output was a consensus view, produced by synthesizing and pattern matching against what already existed. Generating that consensus view in a matter of moments is still quite valuable.
Can AI Match Sell-Side Analysts? Testing OpenAI’s Deep Research
Can AI match human sell-side analysts’ output? I tested OpenAI’s Deep Research to find out. The result? AI can replicate "average" sell-side reports but lacks proprietary insights, making it more of a tool than a replacement. This experiment highlights AI’s strengths, its surprising mistakes, and what it means for the future of investment research.
To have an edge, investors need a view that is different from the crowd and ultimately correct. AI can replicate consensus almost instantly. It cannot yet generate the judgment, narrative, and conviction required to build that edge. Storytelling moves fundamental investors to buy or sell, not a model on its own.
That requires experience, judgment, and good taste. Humans should use AI to synthesize and explore information, so more ideas can be tested. Each response mirrors the prompt and the training material. Human judgment in crafting prompts, plus reinforcement learning from human feedback, shapes outputs to reflect the creator’s taste.
How could one tell if music is made by AI?
As the technology improves, it will become harder to tell whether music is human‑made or machine‑made.
The Switched on Pop podcast and Substack offered potential signs that a track is not human‑created, although those tells may fade as AI improves.
Is it new, or is it repackaged training material and pattern matching? Musicians also recombine influences into something new, and humans are fully capable of generating uninspiring art by copying what has been done before. Yet some create work that feels truly new and special. I am not sure AI can do that yet.
Spotting AI-generated insight content: perception becomes reality
As with AI-generated music, there are potential telltale signs that a piece of content was AI-generated.
Many readers scan inboxes, Substack, LinkedIn, and the wider web and wonder if a piece is AI‑generated. Some point to overused em dashes, “it is not A, it is B” constructions, the word “delve,” and spotless copy as signs. What about expert human writers who use the same devices? Is their work less valuable because it features strong grammar and spelling? Does a writer who loves to use em dashes get penalized as not human?
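To see how fragile these tells are, consider a naive "tell counter." The tell list and scoring below are my own illustrative assumptions, and the point is that a careful human writer trips it just as easily as a machine does.

```python
import re

# A naive "AI tell" scorer that counts the surface features people cite as
# giveaways. The tell list is an illustrative assumption; the point is that
# polished human writing trips it too.
TELLS = [
    r"\u2014",                     # em dashes
    r"\bdelve\b",                  # the infamous verb
    r"\bit is not .+?, it is\b",   # "it is not A, it is B" constructions
]

def ai_tell_score(text: str) -> int:
    return sum(len(re.findall(p, text, re.IGNORECASE)) for p in TELLS)

human_prose = ("Let us delve into the numbers \u2014 it is not growth, "
               "it is margin that drives the story.")
print(ai_tell_score(human_prose))  # 3: a careful human writer, flagged as "AI"
```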
Even with a great idea, a massive document is only a few prompts away. Dozens of slides can appear in minutes. Are those insights valuable? Why would a reader not reduce it to two hundred characters to get the message? A common reaction is, “Why read five thousand words if the author did not put in the time to write it?”
Accurate or not, the search for valuable human‑generated content will force readers to judge whether a piece is AI‑generated and worth reading. The so-called AI tell may cause a consumer of the content to misclassify the human-created work as AI and trust it less.
Finding trusted human‑generated content amid the AI swarm in music and data‑driven insights
There are parallels between the music industry and the investment insights industry.
Music consumers will split. Some will use AI playlists for a vibe. Others will seek human‑made music supported by live shows, stories behind the music, and high‑fidelity vinyl.
Investors will split. Some will use AI‑synthesized consensus to get up to speed on the market view. But to generate real alpha, they will pivot to differentiated insights, which are likely to be human‑led.
How can human musicians make it clear they are the creators in an AI world?
A share of consumers will care about human‑made music. They will attend live shows and seek human connection. Many already prefer vinyl for clarity and fidelity to the original recording.
How can musicians make clear that their work is human‑made?
Taste and judgment matter. Many hits borrow from older songs, but good taste separates generic references from work that feels new and enjoyable.
Show the human side of creation. Share behind‑the‑scenes artifacts, stems, studio videos, and the story behind each song.
Leave purposeful imperfections when they serve the performance. Avoid digital artifacts. If a brief wrong note is not jarring, consider leaving it.
Push beyond current boundaries. Consensus playlists will always have an audience, but originality that resonates builds loyalty.
In live music, errors happen and the song continues. Recorded human‑made music may be rewarded for small imperfections, as long as they do not distract from the art.
How can humans behind data-driven insights make it clear they are the creators in an AI world?
The advice for the human musician parallels the advice for the human insight creator.
Investors often need the consensus view to locate an edge. AI is excellent at synthesizing that view. To gain an edge, investors need non‑consensus views that ultimately prove correct.
Authors must convey the story behind the insight without losing reader trust.
The content is also a product: define the job to be done, deliver it, and make sure the value of the data-driven insight comes through clearly. Human judgment and taste are required. Even when the analytics are the star of the insight, the content provides the narrative that supports the evidence, and that narrative drives repeat usage of the data-driven insights.
Lead with conclusions so the value is clear up front.
Move the debate forward. Add understanding rather than replicating consensus.
Provide artifacts that show method and process. Transparency builds trust.
If appropriate, allow minor imperfections that do not cause confusion. Readers may read them as human signals, but clarity must come first.
Creators must communicate clearly so the target audience can understand and act. That was true before generative AI. Clear writing also helps AI agents that filter the avalanche of content. The risk is that valuable human‑based insights become harder to find and the economics of bringing them to market erode, much like human‑made music. Use AI to improve clarity and impact the way digital tools polish a recording.
I believe we are not at the point where AI can out-think or out-create humans who create exceptional music and accurate non-consensus data-driven insights.
Concluding thoughts
I am not gatekeeping AI‑created music. If you like the songs, enjoy them. If The Velvet Sundown or Rick Beato’s Eli Mercer track is your jam, go for it.
I think the world will split between listeners who like AI‑created music and those who do not. AI will crowd out the economics of human‑created music. Scarcity finds a price. Live shows, vinyl, and behind‑the‑scenes artifacts will matter more.
Investors and decision makers will split as well. AI can already replicate the consensus view and produce convincing research. An edge does not come from consensus summaries, no matter how polished. It comes from being early, different, and correct. Top‑ranked analysts and exceptional musicians create work that today’s AI does not.
-Jason DeRise, CFA (real musician and author)
Sometimes I have to remember that even though the Data Score Newsletter and DataChorus LLC draw an analogy between creating and conducting music and using raw data and tools to generate new insights, this isn’t a music industry blog… but I kind of want to chat more about AI music in the comments. :) Are we ready for a world where we can’t easily tell if music is human‑ or machine‑made?
But yes, I am also keen to hear from others: is the risk of client inboxes being oversaturated with AI slop a real concern for you too?