The Human Layer with Ashley van Heteren: "With AI, we're entering a Second Atomic Age" #Radiogenesis
Ashley van Heteren explains how AI-native biotech is reshaping drug discovery, using radiopharma and generative models to target cancer with greater precision and speed.
For mainstream audiences, the new AI era is mostly about productivity or creativity. The media emphasizes what AI can do for you: acting as an assistant or a coach, writing code, booking flights, or making reservations at fancy restaurants. But this is just the tip of the iceberg. The impact of generative AI on deep tech is immense and will likely transform our lives far beyond what we ever anticipated.
Ashley van Heteren sets out to build what she calls the future of AI-native biotech. Her company, Radiogenesis, co-founded with Marnix Koops, is part of a new generation of startups built without legacy constraints and able to tackle problems with genuinely fresh perspectives. In their case, that problem is cancer. By using AI to design next-generation radioactive drugs, they aim to target tumours more precisely, reduce collateral damage, and lower development costs so new treatments can reach patients faster and at scale.
👉 Article originally posted on TTY
TL;DR
Ashley van Heteren spent years feeling like a cancer researcher trapped in a management consultant’s body. Then radiopharma hit an inflection point, foundation models became practical, and her two parallel lives merged. She left McKinsey’s AI arm to co-found Radiogenesis, an AI-native biotech designing radioactive drugs that seek out and destroy metastatic cancer cells. In this conversation, she explains why generative AI is the missing piece for deep tech. And why she thinks we’re entering a second Atomic Age.
From Nuclear Physics to AI-Native Biotech
Ashley, could you introduce yourself and Radiogenesis in a few words?
So I’m Ashley. I like to describe myself as somebody who’s been forged by my Brownian motion walk of life - small, random movements through time. First a nuclear and medical physicist by training, turned cancer researcher, and later a Partner with McKinsey’s AI arm, QuantumBlack, focused on building disruptive AI products for the pharma industry.
My passion lies at intersections - using AI to bridge, for instance, tumor biology, radiobiology, biochemistry, and nuclear physics. I'm passionate about bringing that disruption to biotech.
That’s why we are building Radiogenesis - an AI-native biotech that uses modern generative, foundation and physics-informed AI to design targeted radioactive drugs - think tumor-seeking molecules that carry a radioactive payload - to treat cancers that are spread throughout the body and really hard to treat today.
I always ask what triggered people’s careers. Spoiler: we’ve already had this discussion, and I find the answer both amusing and unexpected. A reminder of how small things can have such an impact on someone’s life in their early days. So how did a Georgia girl decide to study nuclear and radiological engineering in 1999?
Oh no, I definitely regret having this conversation now that I'm called out globally. ;) But indeed, as an undecided early college student, I remember needing to select one last course to fill my schedule. That same year the James Bond movie The World Is Not Enough had come out, and Denise Richards played this Bond girl nuclear weapons expert. Right about that time I saw an "Intro to Nuclear and Radiological Engineering" class. Well, why not? I thought. And now, here I am. Thank you, Denise Richards.
When we look at your trajectory, from Georgia Tech to McKinsey at QuantumBlack, it feels like Radiogenesis is the synthesis of all the knowledge you’ve built over the years, with generative AI as the missing piece of the puzzle.
It’s funny you mention the trajectory, because for a long time, it didn’t feel like a straight line. It felt like two separate walks of life - I even joked I felt like a cancer researcher trapped in a management consultant’s body.
And then, radiopharma hit this inflection point. Suddenly the field is moving fast. At the same time, generative biology and foundation models are becoming more practical, as in you can use them in a real pipeline, not just publish an academic paper.
Almost overnight, my inbox was split 50/50: half radiopharma, half AI. And with my brain switching back and forth the whole day between topics, it became pretty clear that AI was the missing piece to help solve the complexity of the radiopharma field.
Just like that, these two paths no longer forked - they merged. Starting Radiogenesis didn’t feel like a leap into the unknown; it felt like the most obvious move I’ve ever made. I’m taking the cancer researcher and the AI builder and finally putting them in the same room.
Also, QuantumBlack is a very specific division of McKinsey, not the typical business consulting arm one might think of, and you mentioned it was perhaps the best place to test your entrepreneurial skills before launching your own venture.
QuantumBlack is a pretty unique corner of McKinsey. Unlike traditional consulting, we build code, models and applications where solutions don’t exist. Often, our clients don’t quite know what type of algorithmic approach they need, what the objective function should be, or even what problem they would like it to solve.
What energized me was getting obsessed with finding the pain point that, if solved, would genuinely change the business and linking this to tangible business improvement. Faster cycle times? Lower cost? Better decisions? More revenue?
You need impact quantified well enough that a CEO believes it’s critical for their business, and not in a hand-wavey way, but where you can trace impact back to the P&L.
That value pull-through discipline - getting super clear on the pain point → defining the solution → tying it to measurable value → showing how it impacts the P&L bottom line - is the strongest entrepreneurial training I could’ve asked for. Most people in tech only focus on one or two pieces of this, but it’s the pull-through that’s powerful.
The New TechBio Playbook
For the last few decades, software companies have been the main focus of the startup ecosystem, while deep tech was usually financed by specialized funds because of the technical risks involved, much longer feedback cycles, occasional investments in scientific labs, and uncertainty around its ability to scale or simply to switch from a research culture to a commercial one. Can you explain how current generative AI has reshaped our perspective on the deep tech industry?
Generative AI and molecular foundation models have shifted how we think about deep tech because they have meaningfully compressed the feedback loop.
For decades, a lot of deep tech (and especially techbio) looked “uninvestable” to generalist tech investors because the feedback loop was long and blurry: development was capital-intensive, timelines to output (and revenue generation) were long, and it wasn’t always clear whether teams were building something scalable or generating interesting research and databases that could never commercialize.
What’s shifted is that modern generative and foundation models let teams design and triage before they burn months in the lab. You still need experiments (biology always gets the final vote), but you waste fewer cycles on dead ends and you can show progress in quarters instead of years.
The other thing that’s changed is earlier commercial proof. New techbio companies have landed large pharma collaborations surprisingly early, providing validation that there’s real willingness to pay for design engines, not just for late-stage assets. For example, Generate:Biomedicines announced a collaboration with Amgen in January 2022 (4 years post spin-up) that included $50M upfront and up to $1.9B in potential value, and then a Novartis collaboration in 2024 with $65M upfront and over $1B in milestones. Nabla Bio similarly announced collaborations with AstraZeneca, Bristol Myers Squibb, and Takeda, with first partnerships announced only 2 years post launch.
Finally, I do think there’s been a mindset shift inside techbio itself: fewer companies trying to be “SaaS for biology,” more companies building repeatable R&D machines that generate valuable assets and partnerships. That dual-revenue model of platform-like value creation with asset-level upside is starting to look much more underwritable to generalist tech VCs. That said, you still see a bifurcation in the market of those who are still uncomfortable with that value proposition and those that see the value and are willing to lean in.
What Is RLT
Let’s dive into the specific domain Radiogenesis operates in. Biotech can be quite intimidating for tech profiles building regular software or infra tools. Can you explain what Radioligand Therapy (RLT) is, and how it is used today in the arsenal for cancer treatment?
RLT is a form of targeted radiation that is delivered as a drug. It consists of a targeting agent (the ligand) that searches for proteins found much more abundantly on tumor cells than on healthy cells. This targeting agent carries a radioactive payload that emits radiation over a short distance to kill the cancer cells.
Why is this powerful? Because once a cancer spreads, you need treatments that can reach disease throughout the body. Surgery and external-beam radiation are powerful, but mostly for localized tumors. So once a cancer spreads, physicians lose many options to fight it.
Radioligand therapy puts radiation back into the arsenal of oncologists to fight metastatic cancer. It gives cancer patients another way to fight back.
Chemo and radiotherapy are brutal, and immunotherapy only works for small populations. Could RLT eventually become an earlier line of treatment rather than a last resort?
RLT is already expanding its role in cancer care. For instance, Novartis’ RLT for metastatic prostate cancer got expanded approval last year for earlier use (before chemotherapy). That’s a real signal of moving earlier.
Where this goes next, in my view, is less “RLT replaces everything” and more that it becomes a backbone modality in the arsenal to treat cancer, used earlier in selected patients and increasingly in combinations (with hormone therapy, chemo, other targeted agents).
Of course, there is a long way to go, and a lot to solve to bring adoption to more hospitals, as this type of therapy requires specialized infrastructure, training, regulation and isotope supply chains. But there are strong tailwinds from the larger pharma industry to solve those points and help establish this therapy.
Now that we understand the therapy itself, what does the team and tech stack building it actually look like? How different is an AI-native biotech from traditional biotech companies, and is there overlap with regular software startups?
Indeed, AI-native biotech teams look quite different. Non-AI-native biotech often starts with a specific drug / IP and builds a team to answer: does it work, is it safe, can we make it?
AI-native biotech starts by proving that we can repeatedly generate new IP with faster design-test-learn cycles. So our early team constructs shift towards AI + engineering in addition to biology and chemistry.
The stack to deliver this is essentially: Design → Predict → Filter → Test → Learn. Generative models propose candidates, structure/biophysics + developability constraints filter them, wet-lab partners validate, and results feed back into the system.
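To make that loop concrete for software readers, here is a minimal, self-contained Python sketch of one Design → Predict → Filter pass. Everything in it (the Candidate class, the fake scores, the metric names and thresholds) is an illustrative placeholder rather than the actual Radiogenesis platform; the surviving shortlist would then go to wet-lab partners (Test), with results fed back into the model (Learn).

```python
from dataclasses import dataclass, field

# Toy sketch of one Design -> Predict -> Filter pass. All names, scores and
# thresholds below are hypothetical placeholders, not the real platform.

@dataclass
class Candidate:
    sequence: str
    predicted: dict = field(default_factory=dict)   # filled by the Predict step
    measured: dict = field(default_factory=dict)    # filled later by the Test step

def design(n: int) -> list[Candidate]:
    """Stand-in for a generative model proposing n peptide sequences."""
    return [Candidate(sequence=f"PEPTIDE_{i:04d}") for i in range(n)]

def predict(c: Candidate) -> Candidate:
    """Stand-in for structure/biophysics predictors scoring a candidate."""
    c.predicted = {"binding": (hash(c.sequence) % 100) / 100,        # fake scores
                   "stability": (hash(c.sequence[::-1]) % 100) / 100}
    return c

def passes(c: Candidate, constraints: dict[str, float]) -> bool:
    """Filter: keep candidates whose predicted scores clear every threshold."""
    return all(c.predicted.get(k, 0.0) >= v for k, v in constraints.items())

# One in-silico cycle: design, predict, filter. The shortlist is what would be
# synthesized and tested in the lab; assay results then update the model.
constraints = {"binding": 0.7, "stability": 0.6}
shortlist = [c for c in (predict(c) for c in design(3000)) if passes(c, constraints)]
print(f"{len(shortlist)} candidates survive the in-silico filter")
```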
There is overlap with software startups, including data infrastructure, reproducible pipelines, versioning/experiment tracking, and an MLOps mindset.
Profiles we look for are hybrid builders. ML engineers with scientific instincts, computational modeling folks who can code, strong data/software engineers, and experimental partners who can move fast in tight iteration loops.
Radiogenesis and AI-Driven Discovery
Can you explain how you use AI at Radiogenesis, and how fast do you think you will be able to accelerate the discovery of peptide radioligand therapies compared with traditional methods?
We use AI at Radiogenesis in two modes: first-in-class discovery and best-in-class engineering.
First-in-class is about speed and creativity, finding new binders (the “seekers”) for new tumor targets so radioligand therapy can move beyond the handful of cancers where it works well today. Traditionally, discovery still often looks like a huge wet-lab search: build or screen massive libraries, find one weak “hit,” then spend months iterating one amino acid change at a time. It works, but it’s slow, expensive, and it explores a tiny fraction of what’s possible.
Our approach starts earlier, in silico (using AI to predict the outcomes we would expect in a real-world lab setting). We use molecular foundation models (think LLMs for biological molecules) and generative AI to design peptides de novo (from scratch), then test and filter them before they ever touch the lab. This means we can explore a much larger solution space and walk into the lab with a shortlist of candidates already structurally predicted to work in our real-world setting. This cuts the wet-lab throughput required, saving both cost and time.
Best-in-class is the part most people underestimate. A binder can look great on paper and still fail later - when you radiolabel it, when dose-limiting toxicity shows up, or when it’s hard to manufacture reliably. So we’re building the platform to encode those “late failure modes” as early design constraints, so we build better from the start.
What impact do I think it will have? Some AI drug discovery companies have shown that the early discovery-to-candidate phase can compress dramatically, from 4-6+ years to 9-18 months. Our goal is to meet that AND improve output. We’re not trying to just build one good molecule faster; we’re trying to build an engine that can reliably build many, better.
Companies like Insilico, Aqemia, and Absci have shown that an AI-first approach can work. Yet many established players still run wet lab first, then apply AI to that data. When we last spoke, you called that legacy thinking. Why?
First of all, huge respect for the trailblazers in the field you mentioned. A decade ago, the dominant belief in AI drug discovery was that the real moat was data. Most teams had access to broadly similar machine learning and deep learning methods, so most value came from having high-quality, well-structured and interconnected data.
So the rational workflow became “wet lab first, AI second”: run in-vitro screens, generate data, then use ML to fit that local dataset and either optimize directly or look for signal that can be optimized. That can certainly work, but it’s a data-driven hill-climb.
What changed in the last few years is that foundation models now provide much of that learning out of the box. You’re no longer starting from scratch with a small dataset; you start from models like AlphaFold or RFDiffusion that are built from exceptionally large datasets and encode structural and biophysical regularities. This has shifted the purpose of wet-lab experimentation from creating a ground-truth dataset to serving as a validation and learning loop. We’ve shifted from AI as analytics to AI as an engine.
That’s also why the field is bifurcating. On one side, teams building broad foundation models still need huge datasets and scale. On the other, you can build a focused application engine where the compounding advantage is iteration speed and embedded know-how: generate → test → learn, with each cycle tightening the model and the constraints.
I can absolutely imagine the pendulum swinging back toward data moats in some areas, similar to the “cloud provider vs. application layer” distinction. But right now, in some drug discovery applications, learning velocity can outpace legacy data scale.
In AI-first discovery, the model is often said to be only as good as what it’s trained on. Some companies generate proprietary wet lab data to create defensible moats; others bet on better architectures or physics-informed priors. Where do you think durable competitive advantage actually comes from in this space? Is it data, algorithms, domain expertise, or something else?
Data, algorithms, and domain expertise matter, but honestly none of them are defensible in isolation anymore.
The idea that whoever has the most data wins is already outdated. Most large-scale biology and chemistry datasets have struggled to translate into durable advantage because they are noisy and extracting biological signal from them is challenging.
On the algorithmic side, model architectures and training approaches are converging shockingly fast, with the latest and greatest model often having a 6-12 month half-life in the current ecosystem. The companies that will win will be the ones that can continuously plug new model architectures into high-value data loops.
Domain expertise is often the underappreciated dark horse. In biology and chemistry, if you let the model optimize against the wrong read-out, you can generate a huge volume of impressive, useless molecules or candidates.
So if I had to pick, I’d say the moat is neither data nor algorithms nor experts in isolation. Durable advantage is the AI system design that brings them together into a self-reinforcing decision engine. That includes a tightly scoped, high-signal proprietary dataset built around enabling a specific decision; an architecture-agnostic platform that can swap in the latest open or proprietary models without rebuilding the pipeline; an experimentation stack where every lab run is chosen for direct information gain; and a feedback loop where model predictions update the experimental roadmap weekly and experiment results likewise continuously reshape the model.
Once that loop is running, the gap between competitors using perhaps even similar base models widens over time, because your system is learning on problems that are tuned to your business, not the latest scientifically reported AI benchmark. In practice, this is harder to do than you might think, which is why iteration cycle speed = competitive advantage.
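As a rough illustration of “every lab run is chosen for direct information gain”, the toy Python sketch below uses one common approach: rank candidate experiments by how much an ensemble of models disagrees about them, a standard proxy for expected information gain. The ensemble, candidates and scores are invented for illustration and are not Radiogenesis’ actual method.

```python
import statistics

# Toy active-learning-style experiment selection: pick the candidates an
# ensemble of models disagrees about most (high variance = high uncertainty,
# a common proxy for expected information gain). Illustrative only.

def ensemble_predictions(candidate: str, ensemble) -> list[float]:
    """Each model in the ensemble scores the candidate (e.g. predicted affinity)."""
    return [model(candidate) for model in ensemble]

def pick_next_experiments(candidates, ensemble, batch_size=8):
    """Select the candidates with the highest ensemble disagreement."""
    scored = []
    for c in candidates:
        preds = ensemble_predictions(c, ensemble)
        scored.append((statistics.pstdev(preds), c))
    scored.sort(reverse=True)
    return [c for _, c in scored[:batch_size]]

# Hypothetical ensemble: three "models" that disagree by construction.
ensemble = [lambda c: len(c) * 0.1,
            lambda c: (hash(c) % 10) * 0.1,
            lambda c: 0.5]
candidates = [f"CAND_{i}" for i in range(100)]
print(pick_next_experiments(candidates, ensemble, batch_size=5))
```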
There are now 31 AI-discovered assets in clinical trials and 30% of pharma R&D involves AI. The “show me the candidates” critique has been answered. How does that change the conversation with pharma partners?
The tone has definitely shifted. Five years ago the conversation was “does AI-driven drug discovery really deliver value”, and now it’s “where to place AI bets and how to integrate it into our R&D operating model.” This is no longer a nice-to-have, but it’s a strategic, competitive and capability imperative.
The challenge is that AI is moving at start-up speed, but adoption inside pharma moves at enterprise speed… AI in drug discovery is the small, nimble speedboat, but it needs to steer the large tanker.
Thus, pharma is paying really close attention to this new era of #AI-native biotech. We are fundamentally challenging the current processes of drug development, which have been the same for decades. I think it’s up to us to show exactly what this future AI-engine-driven R&D operating model looks like and what it can deliver.
Trial phases represent 62% of drug development costs, but most late-stage failures originate in discovery. Is fixing the front of the funnel the real leverage point?
Certainly one way to think about it is that clinical trial costs are the tax you pay for early-stage uncertainty. Costs and failure rates are high in clinical trials because by the time you’re in humans, you’re carrying a lot of hidden risk you couldn’t fully see earlier. It’s genuinely hard to optimize a drug candidate across all the dimensions that matter at once - for instance tumor selectivity, tissue distribution, safety margins, and manufacturability.
AI helps you explore much more solution space, improve metrics we know contribute to clinical failures, and kill bad candidates much earlier. We’ve gotten to the proof point of “get new drugs into trials”, which has been largely a speed-impact argument. What we are all waiting for is the next step of “need fewer clinical trials”, whereby designed drugs have a higher success rate.
The next real breakthrough, in my view, is translatability, when we can reliably predict what will hold from in-vitro and animal models to humans. That has been elusive so far, but if we crack it, it’s one of the biggest levers on true capital efficiency.
If improving discovery is about reducing hidden risk early, what would it take to also transform how we validate drugs in humans? Do you see a credible path toward AI reshaping clinical testing itself, and will regulation evolve fast enough to support that shift?
Absolutely, AI for drug discovery is only one angle, but there are other tech evolutions that I’m personally excited about across the drug development lifecycle:
Simulated clinical trials: There has already been a lot of progress using AI to compress timelines and improve trial quality through predictive site selection and trial design. What is further afield is moving towards a world of Digital Twins - biological simulations where we can run a thousand Phase 2 trials before any medicine goes into humans. We shouldn’t be asking “does this drug work on a generic human?” We should be asking “How does this protein interaction play out in a 65-year-old with these specific comorbidities?”.
Better prediction of preclinical translation: To the translatability point I mentioned earlier, today we spend so much time optimizing in-vitro or in animals only to find out the result doesn’t hold in humans. There is a lot of exciting innovation in designing better human-relevant models (organoids, microphysiological systems), paired with AI models that can integrate multi-omics, imaging, distribution and toxicity signals to predict what will actually happen in people. This system, once created, feeds directly into the clinical trial simulations and would create a step change in AI-enabled learning.
We will need regulatory innovation that matches our scientific innovation. On this front, there are positive signals of regulatory momentum, such as the FDA’s (U.S. drug regulatory body) plan to phase out animal models, through stronger adoption of AI-based simulations and alternative technologies like organoids.
You’ve described the feedback loop as the real moat: generate, test, learn, repeat. What does the first full cycle of that loop look like for Radiogenesis, and what do you need to prove in the next 12–18 months?
We are currently at the Generate phase – meaning for our first cancer targets, we have designed a library of >3000 candidates predicted to bind to the target, be stable, and minimize major toxicities. This is already a big step forward over today, where huge libraries of randomized sequences are screened (similar to finding a needle in a haystack), versus our curated, custom-designed candidates.
The next step (the ‘Test’) is taking that into the lab, synthesizing these compounds and validating our predictions. Importantly, this is not a “pass / fail” exercise but a learning one – we profile 11 metrics plus a toxicity scoring system, and what we will Learn is where our model (and the candidates we produce) is strong and where it can be improved. We then update and Repeat for the same target to optimize results, and expand to new functionality and new targets.
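As a rough sketch of that Learn step, the snippet below compares predicted versus measured values per metric to see where a model is strong and where it needs improvement. The candidate names, metric names and numbers are invented for illustration and do not reflect the actual 11-metric panel.

```python
# Toy "Learn" step: compare predicted vs. measured values per metric across the
# tested candidates. Metrics with large errors show where the next model update
# (and the next round of experiments) should focus. All values are made up.

predicted = {
    "CAND_001": {"binding": 0.82, "stability": 0.71, "toxicity": 0.10},
    "CAND_002": {"binding": 0.65, "stability": 0.90, "toxicity": 0.05},
}
measured = {
    "CAND_001": {"binding": 0.78, "stability": 0.40, "toxicity": 0.12},
    "CAND_002": {"binding": 0.60, "stability": 0.88, "toxicity": 0.30},
}

def per_metric_error(predicted, measured):
    """Mean absolute error per metric across all tested candidates."""
    errors = {}
    for cand, preds in predicted.items():
        for metric, p in preds.items():
            errors.setdefault(metric, []).append(abs(p - measured[cand][metric]))
    return {m: sum(v) / len(v) for m, v in errors.items()}

print(per_metric_error(predicted, measured))
```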
What we are proving in the next 12-18 months:
(1) our platform delivers high-quality drug candidates at speed (taking a lead candidate from in-silico (in the computer) to in-vitro (lab results) to in-vivo (in living organisms))
(2) we show scalability and repeatability (multiple targets, candidates)
(3) we are commercially of interest to pharma (partnership secured or in late-stage discussions)
(4) we deliver the unexpected (I always like to overdeliver, so how we get to an unexpected innovation, functionality or target not on the radar is always in the back of my mind)
On a lighter note, could you be our Nostradamus of the day and share some bold predictions for the future of AI across all fields?
I’m a physicist by training, so I might frame this moment as not just a tech moment, but our second Atomic Age. When we discovered the atom, we didn’t immediately grasp that we had actually discovered the base layer for everything. It fundamentally re-engineered how we grow our food, our ability to create and use energy, and has redesigned everything we touch in daily life from our shampoo to the phone in our pocket.
AI is that same kind of foundational discovery. My bold prediction? In ten to twenty years, we won’t be “using” AI; we will be living in a world where the distinction between digital and physical has collapsed because AI has re-optimized the structure of our industries and daily life.
I will also draw an additional parallel… the atomic bomb was the most destructive force we’ve ever created, yet its existence has arguably enforced one of the longest periods of relative peace, set against its potential for destruction. I believe technology trajectories are cyclical. So I imagine our current AI advances could follow a similar trajectory. I leave it to the readers to imagine what that looks like exactly…