Letting People See the Work Before It’s Finished

There’s a moment in building something where you have to decide who you’re optimizing for.

Early on, it’s just you, working through a small set of ideas, sketches, or prototypes. Eventually, you reach a point where the work is real enough that keeping it completely private starts to hurt more than it helps.

I think I’m at that point now.

The Tension Between Polish and Learning

There’s a strong pull to wait until something feels “done” before letting anyone see it. Polished. Documented. Fully formed.

The problem is that polish often hides the most important feedback.

What I care about right now isn’t applause or adoption. It’s learning. I want to know where people pause, what they misunderstand, and which assumptions don’t survive first contact with reality.

That kind of insight doesn’t come from dashboards or signups. It comes from thoughtful eyes on unfinished work.

Choosing Who Gets to Look

I’m not interested in opening the floodgates yet. Early feedback shapes products, whether you want it to or not, and I’m being intentional about who helps shape this one.

Peers. Builders. People who understand tradeoffs. People who are comfortable saying, “This part feels off,” instead of just asking for features.

That’s who I want looking right now.

Comfortable, Not Rushed

Letting people see something before it’s ready isn’t about being early. It’s about being honest about where the work actually is.

I’m comfortable with people looking. I’m comfortable with it being incomplete. I’m comfortable saying, “This is close, but not finished.”

What I’m not interested in is rushing past that phase just to say it’s launched.

There will be a time to open the door wider. I’m not there yet.

But I’m close enough now that letting people see the shape of it feels like the right next step.

Why I’m Finally Ready to Build This Solution

I’ve had ideas like this before.

Over the years, there have been plenty of moments where a new technology showed up and people rushed to declare that everything was about to change. Sometimes they were right. Sometimes they were early. Sometimes they were just obnoxious and faded out with the fad.

More than once, I chose not to act.

Not because I couldn’t build something, but because it didn’t feel like the right moment. The pieces weren’t there yet. Or the problem was still being solved well enough by humans. Or the solution would have created as many issues as it fixed.

My dad was a video engineer by trade. He was one of the first people in Kansas City, if not the very first, trained to run a slow-motion reel-to-reel machine. His experiences with new and emerging technologies helped shape how I think about when to get involved, and when to sit on the sidelines.

When Chyron video technology began emerging in the 1960s and 1970s, it was used sparingly. My dad wasn’t trained on Chyron yet, but he had an idea: he took a small video camera, mounted it on a tripod, and pointed it at the scoreboard during a baseball game so the score could stay on screen throughout the broadcast.

Later, he did the same thing for football and other sports.

That simple workaround helped change what viewers came to expect from televised sports. In a nine-inning baseball game, it’s nice to know the inning, the score, and the time at any moment. Today, those elements are permanently embedded on your screen, so normal you don’t even notice them. They’re expected.

My dad saw a technology that wasn’t being used in the best way possible, and he acted at the right moment.

Experience has a way of teaching you when to move. He was right.

It Seems Like Just Yesterday

It seems like just yesterday, but I’ve lived through multiple waves of tooling shifts myself. Each one promised to simplify software development. Each one delivered real gains, along with new kinds of friction.

What never really went away was the same underlying problem:
humans doing invisible coordination work between systems that don’t quite understand each other.

We learned to live with it. We staffed around it. We normalized it.

For a long time, that was the right call.

Why This Time Feels Different

What’s changed isn’t just the technology. It’s the combination of things finally lining up, and the growing awareness of the gaps that still need to be filled.

We now have systems that can reason just enough to participate in work, not just execute it. We have workflows that can adapt instead of forcing everything down a single happy path. And we’re finally talking openly about the cost of context switching, glue work, and human babysitting of software.

More importantly, we’ve learned what doesn’t work.

Blind automation doesn’t scale judgment. More tools don’t automatically create clarity. And faster output doesn’t guarantee better outcomes.

Those lessons matter.

Waiting Was Part of the Work

If I’m honest, part of being ready now comes from knowing what I don’t want to build.

I don’t want another system that just moves work faster without understanding it. I don’t want something that replaces human judgment instead of supporting it. And I don’t want to rush something into the world just because the timing feels exciting.

I waited until it felt necessary, not just possible.

Close, But Not Quite There Yet

I’m finally at a point where it feels okay to say that I’m building something. In truth, I have been for months.

I’ll be opening a beta soon. I can’t say exactly when yet. But I’m close enough now that the direction is clear and the product is taking its final shape.

For the first time in a long time, it feels like the right moment to act.

What I’m Paying Attention to as We Head Into the New Year

The stretch between Christmas and the New Year has always been a strange and useful pause.

The calendar hasn’t flipped yet, but the pressure eases just enough to think. Projects slow down. Meetings drop off. You get a little space to reflect without immediately turning that reflection into a plan.

This is usually when I take stock of what’s actually worth paying attention to.

Less Noise, More Signal

There’s no shortage of predictions right now. Every week brings another “AI will change everything” headline, another tool launch, another bold claim about the future of work.

Most of it is noise.

What I’m paying closer attention to is quieter:

  • Where teams are still struggling, even with better tools
  • Where automation helps, but also where it gets in the way
  • How often humans are still doing invisible glue work between systems
  • And which problems keep showing up no matter how advanced the tech gets

Those patterns matter more than any single product announcement.

The Gap That Keeps Showing Up

One thing I keep seeing is a growing gap between capability and clarity.

We have systems that can generate code, route work, summarize decisions, and automate entire workflows. But many teams are still unclear about why certain work exists, who should make which decisions, and when software should act versus pause.

More capability doesn’t automatically lead to better outcomes. In some cases, it just makes existing problems happen faster.

That’s the space I find most interesting right now.

What I’m Intentionally Not Rushing

There’s a strong pull at the start of a new year to rush toward answers.

I’m resisting that.

Some problems benefit from speed. Others benefit from sitting with them a little longer. Understanding how people actually work, where judgment shows up, and where things fall apart takes time.

I’m okay with that.

Looking Ahead, Quietly

As we move into the new year, I’ll be sharing more of what I’m observing as these ideas take shape. Not polished conclusions, but real thinking in progress.

If you’re curious where that goes next, I’ve started talking out loud in a few places beyond this blog. You’ll find links on the site if you want to follow along.

No pressure. No sign-ups. Just conversation.

Sometimes the most useful thing at the start of a new year is simply paying attention.

The Most Valuable Skill I’ve Seen Engineers Lose, and Why It Matters Now

This time of year has a way of slowing things down.

As winter settles in and we all spend a little more time with family and friends, there’s often a moment when the noise quiets just enough to think. For most people, I’m sure that moment is about the magic of the holidays, time with loved ones, or maybe even a warm-weather vacation. For me, my head wanders to my work. Not the next sprint or the next feature, but what actually matters over the long haul.

This year, that reflection keeps coming back to a simple question:

What really makes a great engineer?

After decades in this industry, I’ve seen tools change, paradigms shift, and entire job descriptions come and go. But I’ve also watched one critical skill slowly fade into the background.

And right now, that skill matters more than ever.

It Isn’t Coding Speed or Tool Knowledge

The most valuable skill I’ve seen engineers lose isn’t typing speed, language fluency, or familiarity with the latest framework.

It’s not PR velocity or anything measured by DORA metrics. It’s definitely not who has the deepest front-end framework expertise.

All of those things are valuable. But something else is more important.

It’s the ability to reason through ambiguity.

When I was coming up, we didn’t have the luxury of abstraction layers everywhere. If something didn’t work, you traced it. You reasoned about it. You figured out why.

I’ve mentioned before that I used to test candidates on algorithms in my hiring assessments. They mattered. Not because engineers would be implementing them every day, but because algorithms expose reasoning, tradeoffs, and comfort with uncertainty.

The final part of my assessment was a four-question story titled “One Bad Day.” In it, engineers were faced with real-world problems and incomplete information. There were no right or wrong answers. What mattered was how they handled uncertainty when confronted with it.

Those questions revealed how someone thinks when there isn’t a clear path forward.

As software evolved, we got very good at assembling systems. We got much less comfortable sitting with ambiguity.

Abstraction Is Powerful, But It Has a Cost

Modern tools are incredible. They let us build faster and bigger than ever before. But they also hide complexity, and when complexity is hidden long enough, people forget it exists.

That’s how we end up with engineers who are productive, but uneasy the moment something doesn’t behave as expected. When the happy path breaks, the thinking muscle hasn’t been exercised.

AI accelerates this trend if we’re not careful.

Why This Skill Matters More Now, Not Less

There’s a fear that AI will do the thinking for us.

I believe the opposite.

AI is very good at producing output. It’s much worse at knowing when that output is wrong, incomplete, or misaligned with intent. That gap is where human reasoning becomes invaluable.

The real present in this new era isn’t faster code generation. It’s the opportunity to refocus engineers on judgment, evaluation, and problem framing.

Those are learned skills. They compound over time. And they don’t disappear when tools change.

The Gift That Actually Lasts

As you head into the end of the year, maybe while you’re opening presents or just enjoying a quieter moment, this is the thing worth investing in.

The greatest gift you can give yourself as an engineer isn’t another tool or certification. It’s the willingness to slow down, sit with ambiguity, and reason your way through it.

That skill never goes out of style.

A Peek Under the Hood: How I Think About Building the Next Generation of Dev Tools

When I first got into software, there were no libraries waiting for us. No package managers. No Stack Overflow. No copy-paste from GitHub.

If you wanted to build something, you built it. Top to bottom.

If I needed to talk to another system, I opened a socket. I connected via TCP. I handled the protocol. If something failed, I debugged it at the wire level. Applications were handcrafted end to end, and you were responsible for everything you shipped.
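
For anyone who never worked at that level, here’s a rough sketch of what it looked like, translated into modern Python for readability. The host, port, and line-based protocol are hypothetical; the point is that every byte on the wire was yours to handle.

```python
import socket

# A modern-Python sketch of wire-level work. The host, port, and
# line-based protocol here are hypothetical.
HOST, PORT = "peer.internal", 7001

with socket.create_connection((HOST, PORT), timeout=5) as conn:
    # Speak the protocol by hand: send a request, terminated the way the
    # peer expects, then read until the peer closes the connection.
    conn.sendall(b"GET STATUS\r\n")

    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:  # an empty read means the peer closed the connection
            break
        chunks.append(data)

print(b"".join(chunks).decode("ascii", errors="replace"))
```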

That experience shaped how I think about software to this day.

From Building Software to Assembling Software

As my career progressed and I started hiring engineers, I noticed something changing. We stopped teaching people how software works and started teaching them how to assemble it.

That’s not necessarily bad, but it does change the skill set.

When I built hiring assessments, one of the things I always tested was algorithms. Not because I expected everyone to be writing sorting routines every day, but because algorithms tell you how someone thinks. They reveal reasoning, tradeoffs, and how a person approaches a problem when there isn’t already a solution handed to them.

Over time, that kind of thinking became less emphasized. We got very good at wiring packages together. We got worse at understanding what was actually happening underneath.

That shift matters more than most people realize.

Automation Isn’t New, Context Is

Long before AI entered the conversation, I used to tell teams the same thing over and over: I can automate almost anything, but we need to start with people.

You let humans run the process first. You iron out the edge cases. You understand where things break. Then you automate the boring, repeatable parts and kick the exceptions back to humans.
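
As a rough illustration of that model, not any particular product, here’s a minimal Python sketch. The task shape and the rule for what counts as routine are hypothetical; what matters is that only well-understood work gets automated, and everything else goes back to a person.

```python
from dataclasses import dataclass

# Illustrative sketch of "automate the boring, repeatable parts and kick
# the exceptions back to humans." Task fields and handlers are hypothetical.

@dataclass
class Task:
    kind: str
    payload: dict

def is_routine(task: Task) -> bool:
    # Only work we have watched humans handle, repeatedly and predictably,
    # is considered safe to automate.
    return task.kind in {"renewal", "address_change"}

def automate(task: Task) -> str:
    return f"automated: {task.kind}"

def send_to_human(task: Task) -> str:
    # Anything surprising goes back to a person, with its context attached.
    return f"queued for human review: {task.kind}"

def process(task: Task) -> str:
    return automate(task) if is_routine(task) else send_to_human(task)

print(process(Task("renewal", {"id": 42})))          # automated
print(process(Task("billing_dispute", {"id": 43})))  # kicked back to a human
```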

That model has worked for me for decades.

What’s new now isn’t automation. What’s new is that software can finally understand a little bit of context. Not perfectly, but enough to participate instead of blindly executing instructions.

That distinction is everything.

The Real Cost of Babysitting Software

At one company, I had a peer who was incredibly good at keeping things running. Customers were happy. Issues were handled quickly. From the outside, everything looked fine.

When he eventually left, I dug into the system and realized how much time had been spent babysitting instead of fixing root problems. Not because he didn’t care, but because there simply weren’t enough hours in the day.

The company didn’t push for deeper fixes because the work was getting done. In reality, they were paying a premium for reactive support instead of investing in durable solutions.

That experience stuck with me.

Software that constantly needs human babysitting isn’t efficient. It’s expensive. And worse, it hides the real cost behind “everything is working.”

How My Thinking Has Changed

Today, I care less about features and more about intent.

Where does intent originate?
How is it preserved as work moves through systems?
What happens when that intent becomes unclear?
And when should software act versus pause and ask for help?

These questions matter more to me now than which framework is trending.

Some principles I won’t compromise on anymore:

  • Tools should preserve intent, not just produce output.
  • Automation without context just accelerates the wrong outcomes.
  • Humans are not a failure mode; they are part of the design.
  • Software should adapt to people, not force people to adapt to it.

Why This Is Finally Possible

For most of my career, software was deterministic because it had to be. That made context preservation incredibly hard.

Now we have systems that can reason, summarize, adapt, and operate with uncertainty. We can design workflows that branch. That pause. That escalate. That involve humans when confidence drops.

This doesn’t replace judgment. It finally gives judgment a place to live inside the system.
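
To make that concrete, here’s a small hypothetical sketch of a confidence-gated step. The threshold, the names, and how confidence gets estimated are all assumptions; the point is that pausing and escalating are designed branches, not error handling.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: act when confidence is high, pause and involve a
# human when it drops. The threshold and step names are made up.
CONFIDENCE_FLOOR = 0.8

@dataclass
class StepResult:
    output: str
    confidence: float  # 0.0 to 1.0, however the system estimates it

def run_step(step: Callable[[], StepResult]) -> str:
    result = step()
    if result.confidence >= CONFIDENCE_FLOOR:
        return f"acted: {result.output}"
    # Low confidence is not a failure mode; it is a designed branch that
    # hands the decision to a person with the context attached.
    return f"escalated for review: {result.output} (confidence={result.confidence:.2f})"

print(run_step(lambda: StepResult("merge the change", 0.93)))
print(run_step(lambda: StepResult("rewrite the config", 0.41)))
```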

A Quiet Tease

I’m working on something that reflects all of this.

It’s shaped by decades of building software the hard way, watching teams struggle with invisible friction, and seeing how often people end up doing work that tools should have handled better.

I’m not ready to talk about it publicly yet.

But if modern dev tools feel powerful and exhausting at the same time, we’re probably thinking about the same problem.

More soon.

From Monochrome to Machine Learning: Reflecting on a Lifetime in Tech Before the Next Leap

The first computer most people encounter today fits in their pocket.
Mine filled most of a desk. Actually, a desk and a printer cabinet next to it.

My very first computer experiences were on our school’s Apple II machines, playing Oregon Trail on either black-and-white TVs or bright green monochrome monitors. Later at home, in the late 1980s, we got an Epson Equity I+ with a 10 MHz CPU, 640K of RAM, a 10 MB hard drive, a 5.25-inch floppy drive, and an EGA monitor that felt like stepping into full color after years of monochrome. If you lived through that transition, you remember how magical it felt. Not just more pixels, but more possibility.

Since then I have watched wave after wave of transformation reshape software and the world around it.

  • The CPU evolution, from the simplicity of that first 10 MHz machine to modern processing stacks capable of teraflops.
  • Gaming advances, from Pac-Man at the arcade or on Atari at home, to Super Mario on Nintendo, from 8-bit to 16-bit to 64-bit, and eventually to fully immersive 3D experiences.
  • The rise of the internet, starting with dial-up BBS systems and rapidly becoming the global information network we rely on today.
  • Communication shifting from letters and postcards to electronic messaging, long before anyone imagined what smartphones would become.
  • Social platforms growing from AOL chatrooms and small forums to worldwide communities where billions share ideas.
  • The mobile phone evolving from a clunky brick attached to a briefcase to a pocket-sized computer that dwarfs the power of my first PC.
  • Music and movies becoming instantly accessible. No more waiting for release days or storing shelves full of physical media.
  • Breakthroughs in biology that finally let us map DNA and understand life in new ways.
  • Automation taking form in 3D printing, robotics, and smart home technology.
  • Machine learning taking root, followed by modern generative AI that can now write, reason, and assist in ways we only imagined in science fiction.

And these are just the highlights. After 25 years in the industry and a lifetime of tinkering, the list is a long one.

What is interesting about right now, especially with 2026 on the horizon, is that it feels like another one of those rare pivot points. Except this time the shift is not about faster processors or prettier screens. It is about the relationship between humans and software fundamentally changing.

We are entering an era where tools are not just tools. They are partners. They help us think, reason, and build. They fill gaps. They accelerate ideas. They lift creativity instead of replacing it.

For the first time in my career, I am building something that sits directly in the center of that shift. Something that focuses on how to evolve the bigger picture rather than profit from it. Something designed with everyone in mind, not tied to advancing any particular agenda.

I will be announcing something big for software development very soon. At least, I think it is big. For now, I will simply say this:

Everything I have learned since that Epson Equity, from the GW-BASIC and MS-DOS 3.2 manuals, through distributed computing, microservices, and machine learning, all the way to modern AI, has been quietly converging into a project I have been shaping for a very long time. I just did not realize it until recently.

It is close now.

And once it is ready, I believe it will change how a lot of people think about building software, and possibly grow into something far larger than that.