Four Things I Heard at DataDrivenPharma That Matter

I spent last night at DataDrivenPharma, including a panel on how AI is reshaping scientific workflows. The conversations I had, on stage and off, left me more convinced than ever that we’re at an inflection point. Not in the hype-cycle sense. In the practical sense: the industry is moving from asking “should we use AI?” to wrestling with “how do we actually make this work?”
First, a thank you to Ilya and the DataDrivenPharma team for putting together a genuinely substantive event. The caliber of the conversations, and the willingness of attendees to get into the real challenges rather than the talking points, made it one of the more valuable evenings I’ve had this year.
A few themes stood out. They’re worth sharing because they reflect where the field is right now and where the real opportunities are.
1. Team Flow, Not Just Individual Flow
Dipen Sangurdekar (VP of Data Science at KSQ Therapeutics), who was on the biotech panel, introduced an idea that stuck with me: team flow. We talk a lot about individual productivity: how AI helps a single scientist or data engineer move faster. But the bigger unlock is when a group of people can collaborate alongside AI and agents, maintaining shared context and momentum across a project rather than just within a single task.
This resonated because we’ve been experimenting with exactly this at Manifold. The collaboration tools in today’s major AI products (ChatGPT, Claude, and others) are still limited when it comes to true multi-person workflows. They’re built for individual sessions. The opportunity is in building for the team: shared analytical context, collaborative exploration, and the ability to build on each other’s work without the usual coordination overhead.
In life sciences, this matters enormously. The work is inherently cross-functional and often cross-organizational. A biomarker discovery effort might involve a computational biologist, a clinician, a data engineer, and a biostatistician, all of whom need to contribute their expertise without losing the thread. Team flow is the real productivity frontier.
2. Focus on Cutting-Edge Work
Two related themes came up repeatedly. First, that data scientists and computational biologists should invest in learning more biology, not just methods. The most effective people in this space are the ones who can connect the computational output to the scientific question. Domain expertise isn’t optional anymore; it’s the differentiator.
Second, and this came up directly on my panel: AI should free researchers to do more cutting-edge work by handling the operational overhead that currently eats their time. One panelist gave an example I keep coming back to: reformatting plots to a larger image size for a publication submission. That kind of work is real. It takes real time. And it’s exactly the kind of task that should be automated so that talented scientists can focus on the science that actually requires their expertise.
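To make the example concrete, here’s a minimal sketch of the kind of chore that should simply disappear. It assumes figures were pickled to disk during analysis (a common pattern); the target dimensions, DPI, and paths are illustrative placeholders, not any journal’s actual requirements.

```python
# Hypothetical sketch: re-export saved matplotlib figures at a
# publication-ready size. All targets below are placeholders.
from pathlib import Path
import pickle

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed

TARGET_WIDTH_IN = 7.0   # assumed full-page width in inches
TARGET_HEIGHT_IN = 5.0  # assumed height in inches
TARGET_DPI = 600        # assumed print-quality resolution


def reexport_figure(pickle_path: Path, out_dir: Path) -> Path:
    """Load a pickled matplotlib Figure and save it at the target size."""
    with pickle_path.open("rb") as f:
        fig = pickle.load(f)
    fig.set_size_inches(TARGET_WIDTH_IN, TARGET_HEIGHT_IN)
    out_path = out_dir / (pickle_path.stem + ".png")
    fig.savefig(out_path, dpi=TARGET_DPI, bbox_inches="tight")
    return out_path


if __name__ == "__main__":
    out = Path("submission_figures")
    out.mkdir(exist_ok=True)
    for p in Path("figures").glob("*.pkl"):  # assumed pickled-figure folder
        print("wrote", reexport_figure(p, out))
```

Ten lines of scripting, or an agent that writes them on demand, versus an afternoon of a scientist’s time. That’s the trade the panelist was pointing at.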
This connects to something we think about constantly: the most valuable use of AI in life sciences isn’t replacing expert judgment. It’s removing the friction around it. When you can access more high-quality biological data with more data modalities—molecular, phenotypic, clinical, real-world—without the usual tax, you expand the surface area of questions your experts can actually explore. We’ve written about this as the Life Sciences Chasm: the gap between expert intent and technical execution that slows down even the most capable organizations. Our projects with the Komen Tissue Bank and with Foundation Medicine are good examples of what closing that gap looks like in practice: taking a world-class multimodal dataset and making it explorable by researchers in their native scientific language, rather than requiring them to navigate schemas and data dictionaries.
3. Narrow Agents, Human Decisions
Governance came up in nearly every conversation I had. The theme I took away was clear and consistent across clinical leaders, data scientists, and operational teams: high-stakes decisions should be made by humans. Full stop.
What’s evolving is the framing around that principle. People aren’t talking about AI replacing jobs. They’re talking about AI automating specific, well-scoped tasks within a larger workflow so that humans can synthesize the outputs and make better-informed decisions. The natural progression of our work with AI isn’t the automation of decision-making. It’s the augmentation of it.
A pattern I heard repeatedly: teams are gravitating toward narrow agents and purpose-built tools rather than black-box systems. The reasoning is practical. When you use a narrow tool, you can reason about what it did. You can validate its output. You can explain it to a regulator. In a field where the cost of errors is measured in patient safety, that transparency is table stakes.
Take cohort feasibility as an example. Today, checking whether a biorepository has sufficient samples matching a study’s inclusion criteria — the right molecular data, the right demographics, the right longitudinal follow-up — can take weeks of back-and-forth with data coordinators. A well-scoped agent can do that in minutes, and because it’s operating on a defined task with visible logic, you can trace exactly how it got its answer. That’s the design philosophy behind our Agent OS: not a black box that replaces judgment, but a set of purpose-built agents that make expert work faster and auditable.
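To make “visible logic” concrete, here’s a minimal sketch of what the core of such a feasibility check could look like. Everything in it is hypothetical — the criteria, the field names, the data shape, and it is not the actual Agent OS implementation — but it shows the property that matters: every filtering step is explicit and recorded, so the answer arrives with its own audit trail.

```python
# Illustrative sketch of a narrowly scoped cohort-feasibility check.
# Criteria, field names, and data shapes are all made up.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class FeasibilityReport:
    steps: list[str] = field(default_factory=list)  # the audit trail
    remaining: int = 0


def check_feasibility(samples: list[dict],
                      criteria: list[tuple[str, Callable[[dict], bool]]],
                      required_n: int) -> FeasibilityReport:
    """Apply each inclusion criterion in order, logging the count after
    every step so a reviewer can see exactly where the cohort narrowed."""
    report = FeasibilityReport()
    pool = samples
    report.steps.append(f"start: {len(pool)} samples")
    for name, predicate in criteria:
        pool = [s for s in pool if predicate(s)]
        report.steps.append(f"after '{name}': {len(pool)} samples")
    report.remaining = len(pool)
    report.steps.append(
        f"feasible for n={required_n}: {report.remaining >= required_n}")
    return report


# Hypothetical usage with made-up field names:
criteria = [
    ("has RNA-seq", lambda s: s.get("rnaseq", False)),
    ("age 40-65", lambda s: 40 <= s.get("age", 0) <= 65),
    (">= 2 years follow-up", lambda s: s.get("followup_years", 0) >= 2),
]
samples = [
    {"rnaseq": True, "age": 52, "followup_years": 3.1},
    {"rnaseq": True, "age": 71, "followup_years": 5.0},
    {"rnaseq": False, "age": 47, "followup_years": 2.4},
]
print("\n".join(check_feasibility(samples, criteria, required_n=2).steps))
```

An agent built this way can still be wrong, but it can’t be opaque about how it got there. That’s the difference between a tool you can validate and one you have to trust.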
4. The Pilot Explosion, and What Comes Next
One observation that cut across every session: the sheer volume of AI pilots happening inside organizations right now. No one is sitting this out. Teams are exploring potential areas of value across the board, from data harmonization and analytical workflows to evidence generation and clinical trial optimization.
But there’s a real tension emerging between exploration and operationalization. Organizations are trying to balance creative experimentation with harder questions. What’s ready for broader rollout? How do you assess performance and risk? How do you move from a proof of concept that impressed a team of five to a production workflow that serves a hundred?
This is where the conversation is shifting: from “can AI do this?” to “how do we scale it responsibly?” And it’s the right question to be asking. The organizations that figure out governance, performance measurement, and integration into existing workflows will be the ones that capture real value. The ones that stay in perpetual pilot mode won’t.
Where This Is Heading
What struck me most about DataDrivenPharma wasn’t any single talk or panel. It was the consistency of the signal. Across different organizations, different roles, and different therapeutic areas, people are converging on a shared understanding: AI’s value in life sciences isn’t about replacing expertise. It’s about making expertise more accessible, more collaborative, and more impactful at the point of work.
That means building for team flow, not just individual productivity. It means narrow, transparent agents that humans can trust and validate. It means platforms that bring more data together so experts can ask bigger questions. And it means moving past the pilot phase into workflows that actually scale.
This is exactly the problem we’re building Manifold to solve. And conversations like the ones I had last night make it clear that the urgency is real, and growing.
If you were at DataDrivenPharma, or if any of these themes resonate with what you’re seeing in your own organization, I’d welcome the conversation. Feel free to reach out directly or book a few minutes with us. I’d love to hear what you’re working on.