Every Founder Eventually Goes to War

Managing Conflict Isn’t a Phase. It’s the Job

The moment you realize you’re in a fight is rarely dramatic.

It doesn’t happen during a shouting match or a slammed door. More often, it arrives quietly—in a short email, a neutral calendar invite, or a lawyer copied where one didn’t used to be. The language is polite. The tone is professional. Nothing is said outright.

But something has shifted.

The incentives no longer line up. The assumptions that once held the relationship together no longer apply. And you understand—sometimes with a clarity that’s almost physical—that this relationship is no longer governed by trust, but by leverage.

This is the moment most founders misread. They assume they’ve done something wrong. That conflict represents a failure of leadership, communication, or values. That if they had chosen better partners, written better documents, or been more persuasive, this wouldn’t be happening.

That belief is comforting. And mostly false.

Conflict is not a deviation from entrepreneurship. It is one of its core outputs—and one most founders are structurally unprepared to manage.

Conflict Is Structural

Founders bring together a volatile mix: capital, control, ambition, time, and uncertainty. Each behaves differently under stress. Early on, they align easily. Everyone is optimistic. The stakes feel distant. Decisions are reversible. Trust is cheap.

As the company grows—or simply persists—those conditions disappear.

Capital seeks protection. Control seeks clarity. Ambition seeks acceleration. Time compresses. Uncertainty hardens into consequence. What once felt like shared purpose fractures into competing priorities.

This is not because people are dishonest. It’s because incentives diverge as reality asserts itself.

Early harmony is not proof of alignment. It’s proof of low stress.

Every startup increases pressure. Pressure reveals fault lines. Conflict is the result.

The Three Wars Every Founder Fights

Over time, most founders find themselves fighting some combination of three wars.

I’ve seen founders realize this sitting in conference rooms where everyone is nodding—and no one is agreeing. The slide deck moves forward. The meeting ends on time. And afterward, three different people summarize the “decision” three different ways.

1. Internal wars

These are the conflicts founders expect the least and feel the most.

Co-founder relationships degrade not because of malice, but because of asymmetry—of effort, recognition, risk tolerance, or reward. Early sacrifices are remembered differently. Contributions are reinterpreted through the lens of later outcomes.

Authority becomes ambiguous. Titles lag reality. Decision rights remain implicit long after they should be explicit. What was once collaboration becomes negotiation. What was once trust becomes accounting.

These conflicts are uniquely painful because they involve shared history. You’re not just disagreeing over strategy—you’re renegotiating the meaning of the past.

2. External wars

External conflicts feel cleaner, but they’re often more dangerous.

Partners reprice risk midstream. Regulators discover a category after it already exists. Counterparties weaponize process when outcomes turn uncertain. Competitors use rules as tools rather than constraints.

These conflicts are rarely personal. They are rarely fair.

Institutions optimize for risk minimization, not justice. They prefer clean narratives to complex realities. And they often shift the cost of ambiguity onto the most visible actor—the founder.

3. Psychological wars

This is the conflict no one prepares you for.

At some point, the mission itself becomes contested. The story you tell yourself about why you started no longer matches the reality you’re navigating. The company you built now contains forces you don’t fully control—speaking in your name, shaping outcomes you didn’t intend.

Your identity becomes entangled with a system that is no longer purely yours.

This is where founders burn out—not from overwork, but from moral exhaustion.

Why the Founder Myth Breaks Down

Popular culture trains founders to believe that vision, grit, and charisma are sufficient to overcome any obstacle.

This works until it doesn’t.

Vision does not confer authority.

Charisma does not create leverage.

Moral certainty does not translate into institutional power.

And being the founder does not mean the system will protect you.

When conflict escalates, the terrain changes.

The rules are no longer informal. Outcomes are shaped by contracts, procedures, timelines, and incentives that were invisible during the building phase.

Many founders experience this shift as betrayal. In reality, it’s a phase change.

Entrepreneurship rewards builders. It eventually tests governors.

What Actually Wins Wars

In founder conflicts, the objective function changes. The goal is no longer optimization—it is survival with options intact.

Founders often ask how to win conflicts. This is the wrong framing.

Victory is rare. Survival is achievable.

In real founder conflicts, righteousness is rarely decisive. Speed is often counterproductive. Aggression tends to escalate costs. The most reliable advantages are quieter:

  • Time — the ability to wait when others can’t

  • Clarity — knowing which battles matter and which don’t

  • Optionality — preserving multiple paths forward

  • Restraint — understanding when escalation destroys more value than it creates

The founders who last are not the most combative or the most idealistic. They are the ones who recognize that staying solvent—financially, reputationally, psychologically—is often the real win.

The Skill No One Teaches

Entrepreneurship education focuses heavily on creation: product, market, growth. It spends almost no time on dissolution, renegotiation, or conflict containment.

Yet these are not edge cases. They are recurring patterns.

Founders eventually learn (often painfully) that:

  • Some relationships end without resolution

  • Some disputes are not about truth, but about leverage

  • Some outcomes must be accepted, not fixed

  • Silence can be strategic

  • Escalation is rarely reversible

These aren’t lessons. They’re laws. Violating them makes you fragile.

Planning for War Without Becoming One

To acknowledge that conflict is inevitable is not to become paranoid or adversarial. It is to design with reality in mind.

That means:

  • Making incentives explicit early

  • Clarifying authority before it’s contested

  • Designing exits before they’re needed

  • Separating identity from outcome

  • Assuming stress will reveal differences—and planning accordingly

Every founder eventually goes to war.

The ones who endure are not the ones who deny it, nor the ones who relish it.

They are the ones who understand the terrain—and move through it without illusion.

America is a Superpower Running on Legacy Software

America isn’t declining. It’s underperforming, because its institutions can’t match its capabilities.

By Justin Fulcher

I was standing in a Pentagon conference room when a Colonel leaned over the table and said something you never want to hear about the world’s most powerful military:

“We don’t lack technology. We lack tempo.”

Two days later, a veteran I know waited nearly three months for a routine medical scan. In the same week, an American defense startup deployed an autonomous drone that could identify targets faster than their billion-dollar legacy competitors.

That contrast captures America’s moment with uncomfortable clarity:

We are a superpower running on legacy software.

America isn’t declining; it’s underperforming. 

We still dominate the frontier. American firms lead in AI, biotech, space, and advanced computing. The ongoing debate over Nvidia’s H200 chips shows that American technology still yields a comparative advantage so strong it’s viewed as a national security threat. Even China’s chip manufacturing industry – despite heavy government subsidies and intellectual property theft – is years away from our caliber of compute. Our GDP share has held steady for decades. Even the poorest U.S. state’s GDP per capita is on par with Europe’s richest countries.

If America were truly fading, the world would be voting with its wallet and feet. Instead, it is voting for us. Look no further than how many countries raised their defense spending to 5 percent of GDP after the U.S. asked, all while America remains the top landing spot for foreign direct investment, surpassing the second-highest destination by over 100 billion dollars.

The issue is not national decline; it’s institutional drag.

Across government, healthcare, defense, and infrastructure, our core systems operate as if it were 1975. We can field autonomous targeting drones, but we can’t process a passport in under 11 weeks. We can design next-generation hypersonic systems, but we can’t build a bridge without a decade of paperwork. Agencies are buried in compliance while their missions fall behind.

The world hasn’t passed America. Our institutions have just slowed us.

And that is a far more solvable problem. Here’s what can and should be done to bring our systems up to the speed of our capabilities.

America’s fundamentals remain unmatched.

Look at the country as an outside strategist would.

No rival combines our technology base, energy capacity, agricultural abundance, financial depth, global alliances, manpower, and influence. The U.S. remains the only nation capable of projecting power, deterring adversaries, driving innovation, and sustaining a global economic system.

These are not the traits of a collapsing nation.

They are the traits of an underutilized one.

A country this strong has no excuse for institutions this slow.

Our challenges are real, but completely fixable.

Institutional stagnation is not destiny. It is the result of outdated processes, siloed agencies, and a lack of mission alignment.

Other nations have rebuilt their state capacity before: Meiji Japan, postwar Germany, early Singapore, and even the U.S. during WWII and the space race. Renewal came from clarity of purpose and streamlined execution.

That same spirit still exists today, except we have tools those eras never did:

  • AI to accelerate government workflows

  • edge computing that secures critical infrastructure

  • reshored manufacturing that strengthens national resilience

  • digital health systems that widen access

  • defense innovation that restores deterrence

If we want to restore American strength, modernizing our institutions is nonnegotiable. It is the decisive strategic advantage.

America’s biggest victories in the coming decades will come not from expanding government, but from upgrading it - rapidly.

Our renewal mechanism is stronger than any rival’s.

China can mobilize quickly, but it cannot self-correct.

Europe manages consensus well, but cannot scale innovation.

Russia can coerce, but not compete.

America’s weakness is something far easier to fix: institutional latency.

And unlike our competitors, we possess a civic superpower:

We reinvent ourselves - dramatically, decisively, and often exactly when others think we’re done.

American pessimism has been wrong for 200+ years.

It’s wrong again now.

Where We Go From Here

This is not a left-wing or right-wing project.

It is an American project.

Everyone benefits from:

  • a government that works,

  • a healthcare system that delivers,

  • a military that moves with speed,

  • secure borders and resilient supply chains,

  • infrastructure that actually gets built,

  • institutions that earn public trust.

Competence isn’t partisan. It’s patriotic.

America doesn’t need a miracle. It needs modernization.

If we refactor legacy processes, recruit technical talent into civic service, unleash American energy, accelerate procurement, deploy AI for state capacity, and rebuild our defense industrial base with urgency, the U.S. will enter a new era of national strength.

America is not a nation in twilight. America is a nation between chapters.

And the next chapter begins the moment we choose tempo over drift, capability over complacency, and renewal over resignation.

America still has the talent. America still has the tools. America still has agency.

Now we need the tempo.

The Evolution of Telehealth: From Fringe Experiment to Critical Infrastructure

The photo arrived blurry and overexposed, taken under a single, flickering bulb. A nurse typed: “Fever, 3 days. No transport today.” The clinic’s generator had enough fuel for two hours, maybe three if they didn’t run the refrigerator too hard. A video call was never going to happen. But a decision still had to.

That’s the version of telehealth I think about most often. Not the glossy demo of a doctor smiling into a webcam, but the unglamorous work of moving clinical decisions across distance when everything around it is fragile.

It’s tempting to tell a simple story: telehealth was niche, then COVID arrived, then everyone adopted it. The truth is longer and less flattering. Telehealth has been tried for decades.

It succeeded only when four conditions lined up:

  • Bandwidth - can the connection carry anything usable?

  • Billing - will the system pay for the work in a predictable way?

  • Belief - do clinicians and patients trust it enough to use it?

  • Back office - can it fit the real workflow (documentation, follow-up, labs, referrals) without doubling the burden?

I’ll use two terms carefully:

  • Telemedicine is clinical care at a distance. Diagnosis and treatment delivered remotely.

  • Telehealth is the broader system. Telemedicine plus triage, remote monitoring, patient communication, and operational back office.

So how did we get here?

Telehealth didn’t become critical infrastructure because video got better. It became infrastructure when it started to fit the systems that had to carry it. And when those systems, under stress, stopped treating distance as a special case.

I. The Prehistory (1960s–1990s): Telemedicine Before the Internet

Telemedicine began as a serious experiment funded by organizations that could afford serious experiments.

In the 1960s, Massachusetts General Hospital linked clinicians to Boston’s Logan Airport using telecommunications and later an interactive TV microwave link with tools like stethoscope and electrocardiograph capabilities. This was an early attempt to move urgent evaluation across a short (but consequential) distance. Around the same time, NASA pushed remote biomedical monitoring forward for spaceflight, and then brought that mindset back to Earth through programs like STARPAHC (Space Technology Applied to Rural Papago Advanced Health Care), developed with the Indian Health Service and what is now the Tohono O’odham Nation to extend care into a remote reservation using communications technology and a mobile health unit. The military also explored telemedicine as operational support. U.S. Army telemedicine efforts in the early 1990s helped normalize remote consultation inside constrained, high-stakes environments.

On paper, it looked like the future. In practice, it was often structurally doomed.

The technology worked (sometimes) but bandwidth was expensive and brittle. Systems were specialized enough that one broken part could cancel a clinical day. In STARPAHC, providers reported major problems like unreliable equipment and the time burden of TV consultations.  Cost was not just equipment; it was maintenance, training, and the hidden price of asking clinicians to work differently without reshaping the rest of the process.

The deeper mismatch was workflow. Telemedicine could transmit a conversation, but it rarely carried the surrounding system: medication access, diagnostic follow-up, documentation, scheduling, and accountability. Even in the mid‑1990s, reviewers noted that most early programs failed to survive once grant funding ended and that expensive broadband video often wasn’t justified when cheaper, more reliable channels could do the job. 

A counterexample matters here: teleradiology gained traction earlier than many other telemedicine forms because it fit existing professional norms and payment realities better than live video consults did. It didn’t require a shared room at the same time, and reimbursement for radiology interpretation already had pathways that didn’t demand face-to-face contact. 

The lesson from this era is blunt: telemedicine solved narrow problems, but it didn’t fix the systems that made those gaps routine. Bandwidth existed in pockets, but billing, belief, and back office were still missing.

II. The First Internet Wave (2000–2010): Telehealth as a Feature

The commercial internet made a basic form of telehealth possible for ordinary clinics: web portals, email follow-ups, and early video consults. It did not make telehealth easy to adopt.

In the U.S., billing was the decisive limiter. Medicare telehealth coverage under the Physician Fee Schedule began in 2001 and was largely framed as a rural access exception, which often required patients to be at an approved originating site in a rural area rather than at home.  Payment rules emphasized live, interactive telecommunications, with limited allowances for asynchronous (“store and forward”) use in specific demonstration contexts like Alaska and Hawaii.  The signal was unmistakable: telehealth was permissible on the margins, but not yet a default delivery channel.

Physician behavior also slowed adoption. Most clinicians didn’t wake up wanting to add a new visit type. They were already overloaded, and telehealth often meant extra, burdensome steps.  In many cases it meant separate scheduling, awkward documentation, uncertainty about malpractice exposure, and licensure constraints when patients crossed state lines. 

Patients had their own “belief” barriers. In 2003, a choppy webcam call from a desktop computer felt less like care and more like a tech support session. Trust is a clinical ingredient. At the time, it was in short supply.

Regulation added friction where it mattered most. The Ryan Haight Act of 2008 generally required at least one in‑person medical evaluation before prescribing controlled substances via the internet, with specified telemedicine exceptions.  That law was responding to real harms, but it also reinforced a broader reality that certain high-risk clinical actions would remain tightly tethered to in-person norms.

Conversely, large integrated systems and public institutions could make telehealth work internally because they could subsidize it and embed it inside existing care pathways. The average independent practice and early telehealth startup could not.

The internet improved bandwidth and lowered hardware costs, but telehealth remained mostly an add-on. It was useful on the edges, but rarely central. In this decade, telehealth wasn’t blocked by a lack of video. It was blocked by reimbursement, workflow fit, and trust.

III. Smartphones Change the Equation (2010–2015)

Smartphones did something the first internet wave couldn’t.  They made the endpoint universal.

By the end of 2010, the ITU estimated 5.3 billion mobile cellular subscriptions globally, including 940 million 3G subscriptions, with mobile network access available to about 90% of the world population (and 80% of rural populations).  By 2015, the ITU reported more than 7 billion mobile subscriptions and 3.2 billion internet users worldwide, with roughly 2 billion in developing countries. 

That shift changed “bandwidth” from a clinic problem to a population feature. It also changed user expectations: people stopped needing to learn “telehealth.” They already knew how to use a camera, send a message, or share a photo.

Cloud infrastructure mattered too. It became cheaper to run scheduling, messaging, and data storage without installing bespoke systems at every site. In high-income settings, telehealth could now augment mature healthcare infrastructure which included labs, pharmacies, payer systems, and credentialing. In emerging markets, the order was often reversed.

I witnessed firsthand how connectivity typically appeared before healthcare infrastructure. A village could have mobile signal and cheap Android devices long before it had reliable clinic hours, stable medication supply, or enough clinicians. Telehealth suddenly looked like a shortcut to care. But the shortcut still had to connect to something real.

However, one counterexample is instructive: many early mobile health efforts assumed constant connectivity, consistent device storage, and stable patient identity. They failed not because the idea was wrong, but because reliability (power, data, staffing) was still missing.

The key shift from 2010–2015 was that access became possible at scale. The hard truth was that possible is not the same as dependable. Smartphones solved the endpoint, but they didn’t solve the system.

IV. Building in the Real World (2013–2019): Telehealth Outside Ideal Conditions

For years, I worked on telehealth deployments in places where the “ideal conditions” assumed by many designs simply did not exist. Doctors were scarce. Clinics were unreliable. Paper systems dominated. And the ordinary failures of infrastructure were part of the care environment.

This is where typical assumptions quietly break.

Many telehealth models assume that care is delivered through predictable appointments, stable patient IDs, electronic records, and a pharmacy and lab network that reliably completes the plan. In the field, phone numbers change. People share devices. Names are spelled three different ways in three notebooks. Transport collapses the schedule. Backorders turn prescriptions into suggestions.

So telehealth had to become more boring and more operational.

We treated uptime like a clinical metric. If a system is down during the two hours a clinic has power, it may as well not exist. That forced an obsession with unglamorous engineering, such as offline queues, retries, local caching, and fallbacks to voice calls or SMS when data disappeared.
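That pattern of offline queues, retries, and channel fallback can be sketched in a few lines. This is an illustrative outline only, not code from any real deployment; the channel callables, retry threshold, and message shapes are all assumptions for the sketch:

```python
class OfflineQueue:
    """Minimal sketch of a local outbox for unreliable links.

    Messages queue locally; each flush tries the primary data channel
    first, retries on later flushes, and falls back to SMS once the
    retry budget is exhausted. All names here are illustrative.
    """

    def __init__(self, send_data, send_sms, max_retries=3):
        self.send_data = send_data      # callable: True when delivery succeeds
        self.send_sms = send_sms        # degraded fallback channel
        self.max_retries = max_retries
        self.pending = []               # local cache of unsent messages

    def enqueue(self, message):
        self.pending.append({"msg": message, "tries": 0})

    def flush(self):
        """Attempt delivery; keep whatever still can't be sent."""
        still_pending = []
        for item in self.pending:
            if self.send_data(item["msg"]):
                continue                     # delivered over the data link
            item["tries"] += 1
            if item["tries"] >= self.max_retries:
                self.send_sms(item["msg"])   # last resort: SMS fallback
            else:
                still_pending.append(item)   # retry on the next flush
        self.pending = still_pending
```

The design choice worth noting is that nothing is ever silently dropped: a message either delivers, waits, or degrades to a cheaper channel, which is what "uptime as a clinical metric" demanded in practice.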

We also learned quickly that live video was a luxury. Asynchronous “store and forward” flows, in which structured history, photos, and vitals were collected locally and reviewed later, were the workhorse. For this same reason, programs like Alaska’s AFHCAN leaned on asynchronous consultation: it fits unreliable data links.

Many used to think video quality would decide whether telehealth worked. In practice, the decisive variable was clinician time. Anything that added even two extra steps (re-entering notes, struggling with logins, chasing a missing identifier) quietly killed adoption. The best telehealth systems didn’t feel like “remote care.” They felt like less friction around the same care.

Trust was not automatic either. In many settings, a local nurse’s presence carried more legitimacy than a remote physician on a screen. We kept local caregivers central and used telehealth to extend them, not replace them.

I watched more than one program fail after being designed like a policy presentation.  Some implementations assumed stable workflows, constant internet, and a trained workforce ready to change. Telehealth didn’t fail because it was impossible. It failed because it didn’t match the environment.

By 2019, the lesson was clear.  Telehealth works when it adapts to reality, not the reality of a funding proposal, but the reality of power cuts, staffing gaps, and human trust.

V. COVID as an Accelerant (2020–2021)

Many people first encountered telehealth during COVID.  However, COVID didn’t invent telehealth. It simply removed the option to keep treating it as optional.

On March 17, 2020, CMS announced it was expanding Medicare’s telehealth benefits using emergency authority (including 1135 waiver authority), waiving key limitations and allowing many beneficiaries to receive telehealth services in their homes.  Around the same time, HHS’s Office for Civil Rights issued enforcement discretion so clinicians could use certain non-public-facing audio/video tools in good faith without facing HIPAA penalties during the public health emergency. 

Those were structural moves.  They changed billing and reduced compliance friction quickly.

The utilization jump was enormous. One HHS/ASPE analysis described a 63-fold increase in Medicare fee-for-service telehealth visits from ~840,000 in 2019 to 52.7 million in 2020.  The same analysis highlighted how telehealth concentrated differently across specialties. In 2020, telehealth visits made up roughly a third of behavioral health specialist visits, versus 8% for primary care and 3% for other specialists. 

But it’s important to be honest about what changed and what didn’t.

Permission, payment, and cultural legitimacy changed. Clinicians and patients suddenly shared a reason to avoid waiting rooms. A telehealth visit stopped feeling like a novelty and started feeling like a responsible substitute.

However, fragmentation, workforce shortages, and inequities in connectivity didn’t change. Many programs were improvised under pressure. Audio-only visits filled gaps where video failed.  “Telehealth” often meant “we found a way to talk to you,” not “we redesigned care.”

All that progress aside, much care that required hands-on exams, procedures, labs, or imaging did not magically become virtual. Telehealth substituted for some care and preserved continuity for many patients, but it did not replace the physical system.

COVID made telehealth widespread. The post-COVID era would decide whether it became integrated.

VI. The Post-COVID Sorting (2022–2024): What Survived

After the emergency peak, telehealth entered a sorting period. Temporary adoption is not the same as structural integration.

The U.S. COVID‑19 public health emergency expired on May 11, 2023.  Yet many Medicare telehealth flexibilities outlived the PHE because Congress extended them. CMS guidance notes that Section 4113 of the Consolidated Appropriations Act, 2023 extended many Medicare telehealth flexibilities through December 31, 2024, while making some provisions permanent. 

Regulators also tightened their focus on risk. The HHS Office of Inspector General reported that in the first pandemic year, more than 28 million Medicare beneficiaries used telehealth, and it flagged billing patterns that raised program integrity concerns. This was an early preview of why “telehealth at scale,” despite its convenience, would bring scrutiny.

Controlled substance prescribing became a flashpoint. DEA (jointly with HHS) extended COVID-era telemedicine flexibilities for prescribing controlled medications through December 31, 2024, citing the need to avoid care disruption while permanent rules were developed. 

Meanwhile, the market corrected. Some companies built for pandemic conditions struggled when demand normalized and reimbursement uncertainty returned. Babylon Health, for example, filed for Chapter 7 bankruptcy in the U.S. in August 2023.  Teladoc, a major player, recorded large impairment charges in 2022 as expectations reset for digital health assets. 

What survived was telling: 

  • Mental health, where conversation is often the core intervention and telehealth reduces travel friction.

  • Chronic care follow-up and condition management, where outcomes can be maintained without constant in-person visits.

  • Triage and navigation, directing people to the right level of care instead of defaulting to the emergency department.

  • Hybrid care, where telehealth is the front door and in-person care handles exams, imaging, and procedures.

Peer-reviewed studies before and after the pandemic support the idea that telemedicine can be noninferior to in-person care for certain conditions (and useful in chronic disease management), when the pathway is designed appropriately. 

Here’s the infrastructure test many keep coming back to: a thing becomes infrastructure when downtime is treated as a failure of care, not a product bug. By 2024, the surviving telehealth models were the ones that behaved like that.  They were embedded, accountable, and operational.

VII. Telehealth Today: Mature, Boring, and Essential

Telehealth today is not a single product. It’s a set of channels that, when well-integrated, forms an access layer for care: phone, video, asynchronous messaging, and remote monitoring.

In plain terms, telehealth now acts like a load balancer for healthcare systems. It keeps the system from collapsing by moving the right problems to the cheapest safe channel. Some problems need a room, a hand on a patient, or a lab. Many don’t. Telehealth’s power is in handling the “many don’t” without breaking continuity.
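The load-balancer idea can be made concrete with a toy routing function. Everything below, the categories, reason codes, and channel names, is a hypothetical illustration of the "cheapest safe channel" logic, not clinical guidance:

```python
def route_visit(reason, needs_exam, acuity):
    """Illustrative triage sketch: route each problem to the cheapest
    channel that is still safe. Order matters: safety checks come
    before cost optimization.
    """
    if acuity == "emergency":
        return "emergency_department"    # never load-balance an emergency
    if needs_exam:                       # hands-on exam, labs, or imaging
        return "in_person"
    if reason in {"med_refill", "results_review"}:
        return "async_message"           # cheapest safe channel
    return "video_visit"                 # conversation is the intervention
```

For example, a routine medication refill with no exam needed would route to `async_message`, while anything flagged as an emergency short-circuits straight to in-person emergency care.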

It’s also a force multiplier for clinicians, but not in the science-fiction sense. It multiplies clinician reach by reducing wasted motion: unnecessary travel, avoidable in-person check-ins, and administrative dead time between steps of care.

Policy has not fully caught up, but it has moved. CMS has continued to track telehealth use and to specify which flexibilities are time-limited versus permanent, reflecting that telehealth is now part of the operational baseline, not an edge case. 

AI is beginning to integrate into telehealth workflows in practical, unglamorous ways.  For example, it can assist with drafting notes, summarizing visits, translating language, and sorting patient-submitted information.  More “reduce paperwork” than “replace clinicians.”

And plenty still breaks:

  • Reimbursement remains uneven and time-limited in key areas, creating planning risk. 

  • Incentives are misaligned: some systems still lose revenue when care shifts away from billable in-person encounters.

  • Fragmentation persists: telehealth can become yet another silo if it isn’t integrated with labs, referrals, and records.

  • Licensure still complicates cross-state practice, even as tools like the Interstate Medical Licensure Compact offer an expedited pathway for some physicians. 

A clear-eyed definition helps.

Telehealth today is: a distribution system for care, including triage, follow-up, monitoring, and communication, embedded into real pathways.

Telehealth is not: a replacement for physical exams, a magic-bullet cure for clinician shortages, or a shortcut around broken payment models.

Telehealth is mature now, which mostly means it’s less exciting to talk about, and even more necessary to keep running.

VIII. What the Next Decade May Reward

The next decade won’t reward the most impressive telehealth ideas. It will reward the most reliable ones.

What lasts will be defined by reliability, integration, and durability.

The future belongs to operators, not pitch decks. Healthcare is a chain, and telehealth is only one link. The work is making sure the whole chain holds.

Telehealth didn’t become real because the technology finally arrived. The technology arrived many times. Telehealth became real when systems (pushed by necessity, enabled by networks, and legitimized by policy) started treating distance as an ordinary condition rather than a special exception.

And if there’s a final lesson worth keeping, it’s the one embedded in that grainy photo from a flickering clinic: in healthcare, the systems that last will always outcompete the ideas that merely impress.

 

Building Things That Last

Most things that matter are built quietly.

Not because the work is hidden, but because real building rarely announces itself. It unfolds over time, under pressure, and usually without an audience.

I’ve spent much of my career building systems in environments where conditions were not forgiving. Capital was limited. Infrastructure was uneven. Assumptions broke often. There were no playbooks to follow and no guarantees of success.

Those environments teach you quickly what actually matters. They teach you that speed is not the same as progress. That visibility is not the same as credibility. And that narratives are rarely the same thing as reality.

When you build under real constraints, you stop optimizing for optics. You start optimizing for durability.

You begin asking different questions. Not about how things look, but about how they hold:

What survives stress? What fails first? Who absorbs pressure when systems break?

Those questions apply equally to companies, technology, and leadership.

In my work as a founder and tech entrepreneur, including my time building RingMD, the most consequential decisions were rarely the most visible ones. They were decisions about architecture, incentives, and long-term resilience. Decisions that didn’t generate attention in the moment, but quietly compounded over time.

Public narratives tend to compress complexity. That’s not a criticism. It’s simply a function of how stories are told. But anyone who has built something real knows that the essential work happens far from the headlines.

It happens in moments where there is no clear answer. Where information is incomplete. Where responsibility cannot be delegated.

In those moments, confidence matters less than judgment. Charisma matters less than clarity. And execution matters more than explanation.

Over time, I’ve become less interested in celebrating outcomes and more interested in studying what endures. What continues to function when conditions shift. What holds when pressure is applied repeatedly, not just once.

Experience has a way of stripping away the unnecessary. It clarifies what was built for attention and what was built to last.

In the end, substance compounds quietly, long after the noise has moved on.

Revolutionary Courage: The Unyielding Spirit of Samuel Whittemore

As I walked the streets near Cambridge, MA, the inscription on a weathered headstone caught my eye: “Near this spot Samuel Whittemore, then 80 years old, killed three British soldiers on April 19, 1775. He was shot, bayoneted, and beaten and left for dead but recovered and lived to be 98 years of age.”

The words seemed to leap off the stone, pulling me into a story from the American Revolution I’d long admired.

On April 19, 1775, as British soldiers retreated from Lexington and Concord, Whittemore positioned himself behind a stone wall in Menotomy (now Arlington, MA). Armed with a musket, two pistols, and a sword, he single-handedly took on the advancing troops. He fired his musket and brought down one British soldier. Without hesitation, he used his pistols to take down two more. With his ammunition spent, Whittemore didn’t stop. He drew his sword and continued to fight. The British soldiers, stunned by his defiance, responded with overwhelming force. They shot him in the face and bayoneted him repeatedly, leaving him for dead.

But Samuel Whittemore’s story didn’t end there.

Rescuers found him alive, still trying to reload his musket. He survived that brutal day and went on to live another 18 years, passing away at the age of 96. (The monument's inscription rounds his age upward; town records place him at 78 during the battle.) Whittemore's courage and determination remind us that the call to serve our country and our communities knows no age, no limitation, no boundary.

I was preparing to give a speech at MIT, but in this moment, the modern world faded away. Here, among the fallen leaves and the silence of history, I stood at the grave of a man who defied the odds and became a symbol of resilience and courage—traits as vital today as they were two centuries ago. Whittemore’s legacy reminded me that the call to serve and stand for something greater knows no age or circumstance.

Samuel Whittemore didn’t let age hold him back when he became a hero of the American Revolution. At 78, an age when many would be content to reflect on a life well-lived, Whittemore chose to fight. He became the oldest known combatant in the Revolutionary War.

Walking through Milk Row Cemetery, I felt connected to a lineage of service and sacrifice. As I read the inscription on Whittemore’s grave, I couldn’t help but feel a deep sense of awe. Here was a man who, at 78, chose to fight for the future—a future that I’m a part of today. His actions that day were about more than defending his home—they were about standing up for something greater: freedom, justice, and the right to shape our own future. It made me think about the legacy we all leave behind and the contributions we make, no matter our age or situation.

This isn’t just a lesson from history; it’s a call to action. In today’s world, where challenges seem ever-present and the road ahead often uncertain, Whittemore’s story speaks directly to us. His example urges us to find ways to serve, regardless of age, circumstance, or station in life. Whether through small acts of kindness or grand gestures of sacrifice, we are all called to contribute to the greater good. This spirit of service has defined our nation and will continue to propel us forward.

Whittemore’s actions echo the words of John Adams, who once said, “Our obligations to our country never cease but with our lives.” Whittemore took these words to heart, and his legacy reminds us that our duty to contribute never truly ends. And it’s not just an American tale—it’s a human one. Whittemore’s story speaks to anyone who believes in standing up for what’s right, in contributing to something greater than oneself. In today’s global world, his legacy transcends borders, reminding us all that courage and service are universal values.

Leaving Milk Row Cemetery and heading to MIT, I felt a renewed sense of purpose. Whittemore’s story reminded me that true service isn’t just about words or symbols—it’s about action. It’s about stepping forward, even when the odds seem insurmountable, and doing our part to ensure that the ideals we hold dear endure for future generations.

As I stood before the audience at MIT, I knew that each of us has a role to play in the ongoing story of our nation. Samuel Whittemore’s life serves as a powerful example of what one person can achieve, regardless of age or circumstance. His legacy calls each of us to rise to the occasion, to serve with courage, and to contribute in whatever way we can to the enduring story of America.

SC must boost cybersecurity strategy

I refreshed the screen, and I saw it: zero balance. I refreshed my crypto wallet again. Gone.

Despite being immersed in the tech world for nearly two decades, I fell victim to a sophisticated cyberattack. It can happen to any of us.

In today’s interconnected world, cybersecurity is no longer a luxury. It’s a necessity. The growing reliance on digital technology has transformed how we live, work and play. However, it is not without its risks.

Cybercriminals continue to find new ways to exploit vulnerabilities and threaten the security of individuals and businesses across the state.

According to the FBI’s 2022 Internet Crime Report, thousands of South Carolinians were victims of cybercrime, with aggregate losses of more than $100 million in 2022 alone. This more than doubled 2021’s losses of nearly $43 million.

In 2012, hackers attacked South Carolina’s Department of Revenue and stole nearly 3.8 million tax records.

After the breach, our state made immediate efforts to improve cybersecurity standards. However, South Carolina’s Information Security Program Master Policy and Handbook have not been updated since 2014.

More can and should be done.

We could start by developing a comprehensive cybersecurity strategy, broadening public-private partnerships and perhaps even establishing a dedicated cybersecurity agency.

By working together, we can allocate the necessary resources and implement robust defense mechanisms to ensure a safer digital future for all individuals and businesses in South Carolina.

As for the lost crypto, it was an expensive but valuable lesson that we can all take to heart.