Issue #65 // Multiplex Molecular Biosensing
Thoughts On Continuous Protein Biosensors
I’ve been involved in the wearable technology industry for more than a decade now, first as a technical research consultant and scientific advisor to early-stage companies in this space and later as a co-founder at NNOXX. During this time I’ve seen two parallel trends unfold: the first is a massive cost-reduction for existing sensing modalities, and the second is a significant increase in the number of biomarker measurements that can be obtained with a wearable device.
I still remember working with the original Omegawave system—a cumbersome, expensive contraption that felt like a relic from the Soviet cosmonaut training program (in a way, it was). Omegawave was the first commercially available device to measure HRV and DC potential with the explicit goal of predicting "readiness" and was a precursor to Whoop and Oura. Back then, wearable devices were limited to a small number of measurements like heart rate, HRV, blood oxygenation, and skin temperature—vital signs that any 20th century physician would recognize. Today we live in a different world entirely, with muscle oximeters, portable metabolic analyzers, in-ear EEGs, hydration sensors, and more becoming available all the time.
After all of this progress, we’re approaching a conceptual precipice. The industry is now experiencing what looks to be a natural continuation of these trends with molecular biosensors capable of continuously measuring specific proteins and metabolites in blood and interstitial fluid. Companies are already developing devices that can measure real-time blood lactate, monitor ketone levels throughout the day, and track inflammatory cytokines like IL-6 at regular intervals. Unsurprisingly, VCs have taken notice, funding startups that promise to continuously monitor various proteins in real time with minimally invasive sensors. The implicit assumption seems to be that if measuring one molecule—glucose—revolutionized diabetes management, then surely measuring other molecules will unlock similar insights for a wide range of applications.
But what if this assumption is wrong? What if molecular biosensors aren’t a natural progression from our current wearable technologies—what if they represent a category shift that the industry as a whole doesn’t seem to recognize? Continuous protein measurements and other molecular biomarkers aren’t just higher-resolution versions of gross physiological metrics like heart rate or blood oxygenation (collectively termed 'digital biomarkers'). They are fundamentally a different kind of measurement that requires a different interpretive framework. What follows are three connected arguments:
First, molecular biomarkers operate according to different principles than the digital biomarkers we’re accustomed to tracking;
Second, current business incentives are driving development toward single-analyte sensors that may not deliver their promised value; and
Third, alternative paths—as opposed to those currently being pursued—might better match the technology to contexts where its capabilities are most needed.
Digital and molecular biomarkers are complementary, but they’re fundamentally different types of measurements that each require distinct interpretive frameworks. The jump from measuring heart rate to measuring individual proteins isn’t incremental progress—it’s moving from integrated physiological outputs to individual molecular components. More precisely, it’s crossing layers of biological abstraction—from emergent system-level phenomena down to the molecular entities that underlie them. From my experience working with both digital and molecular biomarkers independently, as well as developing methods to integrate them via knowledge-graph-based systems, it’s clear that the industry is attempting to apply a digital biomarker framework to molecular measurements—and that approach is built on a flawed assumption about how proteins work.
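For readers curious what "knowledge-graph-based" integration can look like in miniature, here is a deliberately tiny sketch in plain Python: biomarkers and physiological concepts as nodes, typed relationships as edges. The node names and relations below are hypothetical illustrations, not a real ontology or a description of any production system.

```python
# Minimal, illustrative knowledge-graph sketch (plain tuples, no framework).
# Each edge is (source, relation, target); all relations here are invented
# examples of how digital and molecular biomarkers might be linked.
graph = {
    ("IL-6", "mediates", "inflammation"),
    ("CRP", "marker_of", "inflammation"),
    ("HRV", "modulated_by", "inflammation"),
    ("heart_rate", "digital_biomarker_of", "autonomic_state"),
}

def related_to(concept: str) -> set[str]:
    """Return every node linked to `concept` by any edge, in either direction."""
    out = set()
    for src, _rel, dst in graph:
        if dst == concept:
            out.add(src)
        if src == concept:
            out.add(dst)
    return out

# Which measurements connect to "inflammation" in this toy graph?
print(sorted(related_to("inflammation")))
```

Even this toy version makes the interpretive point: a molecular reading gains meaning from the edges around it, not from its value in isolation.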
Consider what happens when you measure Interleukin-6 (IL-6) continuously. IL-6 is commonly described as a pro-inflammatory cytokine, one of the key mediators of fever and the acute phase response. It would seem reasonable to interpret elevated IL-6 as indicating more inflammation. But the biology is more complex than this. IL-6 can signal through two different pathways: classical signaling via membrane-bound receptors tends toward regenerative and anti-inflammatory effects, while trans-signaling through soluble receptors drives chronic inflammation. The same molecule at the same concentration can have entirely different biological meanings depending on context, which I previously wrote about in Issue #55: Molecular Moonlighting.
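To make the context-dependence concrete, here is a toy Python sketch of how an interpretation layer might condition an IL-6 reading on a co-measured quantity such as soluble IL-6 receptor (which enables trans-signaling). The thresholds, units, and logic are arbitrary placeholders for the sake of illustration, not clinical values or a validated model.

```python
# Toy illustration (not a validated model): the same IL-6 concentration
# can imply different biology depending on co-measured context, here the
# soluble IL-6 receptor (sIL-6R). All thresholds are arbitrary placeholders.

def interpret_il6(il6_pg_ml: float, sil6r_ng_ml: float) -> str:
    """Return a coarse, illustrative reading of an IL-6 measurement."""
    if il6_pg_ml < 5.0:
        return "baseline"
    # Elevated IL-6: the dominant signaling mode depends on context.
    if sil6r_ng_ml > 40.0:
        return "elevated IL-6, trans-signaling context (pro-inflammatory)"
    return "elevated IL-6, classical-signaling context (regenerative)"

# The same IL-6 value yields different interpretations:
print(interpret_il6(20.0, 60.0))
print(interpret_il6(20.0, 10.0))
```

The point of the sketch is structural: a single-analyte sensor cannot even pose the second question, because the disambiguating variable is a different molecule.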
Interleukin-6 isn’t unique in this regard. All proteins exist in dynamic networks where function emerges from context, relationships, and timing. A kinase that phosphorylates one substrate during cell cycle progression might phosphorylate a different substrate during stress response, creating entirely different downstream effects. The protein hasn’t changed—its relational context has. This has implications for how we should think about molecular biosensors. The quantified-self movement has largely embraced single-molecule monitoring: track your cortisol for stress, measure your C-reactive protein for inflammation, monitor your glucose for metabolic health. But proteins are components within networks, not standalone indicators. Measuring a single protein is like measuring the voltage across one resistor in a complex circuit—the measurement may be accurate, but its meaning depends on the state of the rest of the system.
This is where molecular biosensors differ from the gross physiological metrics we’re used to tracking. Heart rate and blood oxygenation are already emergent phenomena that integrate signals from multiple systems. They represent the output of countless molecular interactions, abstracted up to a level where single measurements carry interpretable meaning. A single protein measurement, by contrast, is looking at one node in a network where meaning comes from relationships rather than absolute values.
Glucose monitoring deserves special consideration here, since it’s the standard reference point for continuous molecular measurement. Glucose occupies an unusual middle ground between gross physiological metrics and true molecular biomarkers. While glucose is technically a molecule, its behavior in the bloodstream functions more like a systems-level measurement. Glucose concentration reflects the integrated output of multiple regulatory processes—insulin signaling, hepatic glucose production, peripheral uptake, hormonal counter-regulation—operating within relatively stable kinetic parameters. The interpretation of a glucose value doesn’t depend heavily on knowing the concentration of specific enzymes or transporters at that moment; the meaning is largely contained in the glucose value itself, contextualized by simple factors like food intake and activity. This is precisely what makes it tractable as a single-analyte measurement.
Most proteins don’t work this way. Their function depends on post-translational modifications, binding partners, subcellular localization, and the expression levels of their targets and regulators—information that isn’t captured in a concentration measurement. Glucose monitoring succeeded not because all single-molecule measurements are inherently useful, but because glucose happens to be one of the few molecules whose concentration alone carries robust meaning across most contexts. The practical implication here is that single-analyte molecular biosensors will have limited utility until we can measure enough proteins simultaneously to capture meaningful biological context. The technical achievement of measuring proteins like IL-6 in interstitial fluid is genuinely impressive, but the value proposition may be more limited than current market enthusiasm suggests. At the same time, the longer-term potential once multiplexing becomes feasible may be under-appreciated.
Building a multiplex molecular biosensor requires substantial capital, sophisticated engineering, and regulatory pathways that are still being established (and rapidly changing at the time of this writing), creating an interesting challenge for technology development. For a small biotech company, developing a single-analyte sensor is the more tractable path. You can focus on one molecule, establish a clear use case, validate the technology, navigate regulatory approval, and potentially become an acquisition target for established players like Dexcom or Abbott, among others. This pathway is already clear, and it makes business sense. Major continuous monitoring companies have strong incentives to expand into molecular sensing. But there’s a potential issue with this trajectory. When these technologies get integrated into existing platforms, they’re likely to be presented as independent metrics—just another number on the dashboard—rather than as nodes in a network that require contextual interpretation. This reflects the current business model of these companies, which centers on providing users with actionable numbers they can understand and respond to.
The alternative would be to develop multiplex sensing from the outset, but this requires investors who understand why single-analyte approaches might not deliver the anticipated value and who are willing to fund a longer, more capital-intensive development path toward a market that doesn’t yet exist. This is a harder pitch in the current early 2026 funding environment.
That said, there may be alternative entry points worth considering. Critical care medicine already involves multiplex measurements through continuous monitoring combined with frequent blood tests. Clinicians in intensive care settings are accustomed to thinking about complex physiological states rather than individual metrics. A multiplex interstitial fluid sensor that helps predict complications like sepsis before vital signs change would have clear clinical utility without requiring a shift in how practitioners interpret biological data. Once proven in this context, the technology could potentially scale to consumer applications. This inverts the typical medical device development trajectory, but it might better match the technology to contexts where its value is most apparent.
One concern about the incremental single-analyte path is that design decisions made early on can constrain future development—a topic I discussed in Issue #48: The Garden of Technological Possibilities. Sensor architecture, regulatory strategy, manufacturing processes, and user interfaces optimized for single measurements may not easily extend to multiplexing. Adding measurements later isn’t just an engineering challenge; it may require rebuilding substantial portions of the technology stack and rethinking how information is presented to users.
When multiplex molecular biosensors do become practical, they’ll enable some genuinely new capabilities. Rather than reporting that a single biomarker is elevated, they could describe an individual’s network state and how that relates to physiological outcomes for that specific person. We could generate personalized protein expression networks, identify which regulatory hubs are most important in an individual’s biology, and potentially detect transitions between physiological states before conventional metrics change. This represents a form of precision medicine that’s complementary to genomics—these networks are dynamic and change with disease progression, treatment, and recovery.
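As a rough sketch of what "network state" analysis could involve, the snippet below builds a simple co-expression network from synthetic multiplexed time-series data and ranks analytes by connectivity to surface candidate hubs. The analyte panel, the data, and the 0.5 correlation threshold are all invented for illustration; real network inference would be considerably more involved.

```python
import numpy as np

# Sketch (assumed workflow, not a product spec): from multiplexed protein
# time series, build a co-expression network and rank analytes by degree.
rng = np.random.default_rng(0)

proteins = ["IL-6", "CRP", "TNF-a", "cortisol", "lactate"]
n_samples = 200

# Synthetic data: CRP is partly driven by IL-6 to create one strong edge.
data = rng.normal(size=(n_samples, len(proteins)))
data[:, 1] += 1.5 * data[:, 0]  # CRP tracks IL-6

corr = np.corrcoef(data, rowvar=False)  # pairwise correlation matrix
adj = (np.abs(corr) > 0.5) & ~np.eye(len(proteins), dtype=bool)

# Degree centrality: how many strong edges each analyte participates in.
degree = adj.sum(axis=0)
for name, d in sorted(zip(proteins, degree), key=lambda x: -x[1]):
    print(f"{name}: {d} strong connection(s)")
```

Even this crude degree ranking illustrates the shift in framing: the output is a statement about relationships among measurements, not about any one concentration.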
We’re not at that point yet. The current development trajectory shows many parallels to early genomics, where there was an assumption that identifying components would quickly translate to understanding systems. The Human Genome Project, for all its triumph, didn’t immediately yield the therapeutic revolution that many expected—because knowing the parts list doesn’t tell you how the machine works. We’re building increasingly sophisticated tools to measure individual proteins while grappling with the question of how much information a single measurement actually provides when biological function emerges from molecular relationships.
The question facing the field is not whether we’ll eventually need multiplex approaches—the biology makes that clear. The question is whether we’ll recognize this early enough to avoid spending years and considerable resources developing single-analyte sensors that provide limited actionable information. From where I sit, the current trajectory looks misguided rather than suboptimal. The industry is building toward a capability that glucose monitoring seems to validate, but glucose is the exception that proves the rule. Most proteins require network context for meaningful interpretation, and no amount of sophisticated algorithm development or machine learning will extract signal that isn’t there in the first place.
Investors and companies have a choice: continue down the incremental path of single-analyte sensors because it fits established business models and funding timelines, or acknowledge that we’re working at a different layer of biological abstraction that demands different technological and interpretive approaches from the outset. The technical achievements in molecular sensing are real and impressive. The question is whether we’re building the right things, or just the things that are easiest to build and sell with our current conceptual frameworks.
Thanks for reading! If you found this post useful, please consider subscribing. I share hands-on computational biology techniques, fresh ways to think about tough problems, and perspectives on a range of related topics. All free, straight to your inbox.