The Most Valuable Skill I’ve Seen Engineers Lose, and Why It Matters Now

This time of year has a way of slowing things down.

As winter settles in and we all spend a little more time with family and friends, there’s often a moment when the noise quiets just enough to think. For most people, I’m sure that thinking turns to the magic of the holidays or maybe even a warm-weather vacation. For me, my head wanders to my work. Not the next sprint or the next feature, but what actually matters over the long haul.

This year, that reflection keeps coming back to a simple question:

What really makes a great engineer?

After decades in this industry, I’ve seen tools change, paradigms shift, and entire job descriptions come and go. But I’ve also watched one critical skill slowly fade into the background.

And right now, that skill matters more than ever.

It Isn’t Coding Speed or Tool Knowledge

The most valuable skill I’ve seen engineers lose isn’t typing speed, language fluency, or familiarity with the latest framework.

It’s not PR velocity or anything measured by DORA metrics. It’s definitely not deep front-end framework expertise.

All of those things are valuable. But something else is more important.

It’s the ability to reason through ambiguity.

When I was coming up, we didn’t have the luxury of abstraction layers everywhere. If something didn’t work, you traced it. You reasoned about it. You figured out why.

I’ve mentioned before that I used to test candidates on algorithms in my hiring assessments. They mattered. Not because engineers would be implementing them every day, but because algorithms expose reasoning, tradeoffs, and comfort with uncertainty.

The final part of my assessment was a four-question story titled “One Bad Day.” In it, engineers were faced with real-world problems and incomplete information. There were no right or wrong answers. What mattered was how they handled uncertainty when confronted with it.

Those questions revealed how someone thinks when there isn’t a clear path forward.

As software evolved, we got very good at assembling systems. We got much less comfortable sitting with ambiguity.

Abstraction Is Powerful, But It Has a Cost

Modern tools are incredible. They let us build faster and bigger than ever before. But they also hide complexity, and when complexity is hidden long enough, people forget it exists.

That’s how we end up with engineers who are productive, but uneasy the moment something doesn’t behave as expected. When the happy path breaks, the thinking muscle hasn’t been exercised.

AI accelerates this trend if we’re not careful.

Why This Skill Matters More Now, Not Less

There’s a fear that AI will do the thinking for us.

I believe the opposite.

AI is very good at producing output. It’s much worse at knowing when that output is wrong, incomplete, or misaligned with intent. That gap is where human reasoning becomes invaluable.

The real present in this new era isn’t faster code generation. It’s the opportunity to refocus engineers on judgment, evaluation, and problem framing.

Those are learned skills. They compound over time. And they don’t disappear when tools change.

The Gift That Actually Lasts

As you head into the end of the year, maybe while you’re opening presents or just enjoying a quieter moment, this is the thing worth investing in.

The greatest gift you can give yourself as an engineer isn’t another tool or certification. It’s the willingness to slow down, sit with ambiguity, and reason your way through it.

That skill never goes out of style.

A Peek Under the Hood: How I Think About Building the Next Generation of Dev Tools

When I first got into software, there were no libraries waiting for us. No package managers. No Stack Overflow. No copy-paste from GitHub.

If you wanted to build something, you built it. Top to bottom.

If I needed to talk to another system, I opened a socket. I connected via TCP. I handled the protocol. If something failed, I debugged it at the wire level. Applications were handcrafted end to end, and you were responsible for everything you shipped.
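
For anyone who never had to work at that level, here is a minimal sketch of what “talking to another system” meant. It’s illustrative Python rather than the code of that era, and the host, port, and one-line protocol are placeholders, but the shape is accurate: open a socket, frame your own messages, and own every failure.

```python
import socket

HOST = "peer.example.com"  # placeholder; any TCP service you control
PORT = 7                   # placeholder; 7 is the classic echo service

# Open a TCP connection. No client library, no retries for free:
# every failure mode below this line is yours to handle.
with socket.create_connection((HOST, PORT), timeout=5) as conn:
    conn.sendall(b"hello\n")  # speak the wire protocol by hand

    # TCP is a byte stream, so framing (knowing where a message ends)
    # is also your problem. Here the protocol is "one line per message."
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = conn.recv(1024)
        if not chunk:  # peer closed the connection mid-message
            raise ConnectionError("connection closed before reply finished")
        buf += chunk

print(buf.decode("ascii", errors="replace"))
```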

That experience shaped how I think about software to this day.

From Building Software to Assembling Software

As my career progressed and I started hiring engineers, I noticed something changing. We stopped teaching people how software works and started teaching them how to assemble it.

That’s not necessarily bad, but it does change the skill set.

When I built hiring assessments, one of the things I always tested was algorithms. Not because I expected everyone to be writing sorting routines every day, but because algorithms tell you how someone thinks. They reveal reasoning, tradeoffs, and how a person approaches a problem when there isn’t already a solution handed to them.

Over time, that kind of thinking became less emphasized. We got very good at wiring packages together. We got worse at understanding what was actually happening underneath.

That shift matters more than most people realize.

Automation Isn’t New, Context Is

Long before AI entered the conversation, I used to tell teams the same thing over and over: I can automate almost anything, but we need to start with people.

You let humans run the process first. You iron out the edge cases. You understand where things break. Then you automate the boring, repeatable parts and kick the exceptions back to humans.

That model has worked for me for decades.
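
To make that concrete, here is a minimal sketch of the pattern in Python. The invoice domain and every name in it (Invoice, post_to_ledger, send_to_human_queue) are hypothetical stand-ins, not a real system: automate the cases people have already ironed out, and route everything else back to a person.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    id: str
    currency: str
    total: float

def post_to_ledger(invoice: Invoice) -> None:
    # Stand-in for the real automated action.
    print(f"posted {invoice.id} to the ledger automatically")

def send_to_human_queue(invoice: Invoice, note: str) -> None:
    # Stand-in for whatever exception queue the team already uses.
    print(f"queued {invoice.id} for a person: {note}")

def automate(invoice: Invoice) -> None:
    """Automate only the boring, repeatable cases humans have ironed out."""
    if invoice.currency != "USD":
        raise NotImplementedError("multi-currency not automated yet")
    if invoice.total > 10_000:
        raise NotImplementedError("large invoices still need human review")
    post_to_ledger(invoice)

def handle(invoice: Invoice) -> None:
    try:
        automate(invoice)
    except NotImplementedError as reason:
        # Exceptions go back to people, exactly as they did before the
        # automation existed. The human queue is part of the design.
        send_to_human_queue(invoice, note=str(reason))

handle(Invoice("INV-1", "USD", 250.00))  # handled automatically
handle(Invoice("INV-2", "EUR", 250.00))  # kicked back to a human
```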

What’s new now isn’t automation. What’s new is that software can finally understand a little bit of context. Not perfectly, but enough to participate instead of blindly executing instructions.

That distinction is everything.

The Real Cost of Babysitting Software

At one company, I had a peer who was incredibly good at keeping things running. Customers were happy. Issues were handled quickly. From the outside, everything looked fine.

When he eventually left, I dug into the system and realized how much time had been spent babysitting instead of fixing root problems. Not because he didn’t care, but because there simply weren’t enough hours in the day.

The company didn’t push for deeper fixes because the work was getting done. In reality, they were paying a premium for reactive support instead of investing in durable solutions.

That experience stuck with me.

Software that constantly needs human babysitting isn’t efficient. It’s expensive. And worse, it hides the real cost behind “everything is working.”

How My Thinking Has Changed

Today, I care less about features and more about intent.

  • Where does intent originate?
  • How is it preserved as work moves through systems?
  • What happens when that intent becomes unclear?
  • And when should software act versus pause and ask for help?

These questions matter more to me now than which framework is trending.

Some principles I won’t compromise on anymore:

  • Tools should preserve intent, not just produce output.
  • Automation without context just accelerates the wrong outcomes.
  • Humans are not a failure mode; they are part of the design.
  • Software should adapt to people, not force people to adapt to it.

Why This Is Finally Possible

For most of my career, software was deterministic because it had to be. That made context preservation incredibly hard.

Now we have systems that can reason, summarize, adapt, and operate with uncertainty. We can design workflows that branch. That pause. That escalate. That involve humans when confidence drops.
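
As a minimal sketch, assuming the underlying step can attach some confidence score to its own output (the names, the score, and the 0.8 threshold are all illustrative, not a real API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StepResult:
    output: str
    confidence: float  # 0.0-1.0, however the underlying model reports it

def run_step(step: Callable[[], StepResult],
             ask_human: Callable[[str], str],
             threshold: float = 0.8) -> str:
    """Act when confident; pause and escalate to a person when not."""
    result = step()
    if result.confidence >= threshold:
        return result.output  # high confidence: the software acts
    # Confidence dropped: pause the workflow and hand the decision,
    # with its context, to a human instead of guessing.
    return ask_human(
        f"Low confidence ({result.confidence:.2f}) on: {result.output!r}. "
        f"Approve or correct?"
    )
```

The branch itself is trivial. The real design work is deciding what context travels with the escalation so the person on the other end can actually exercise judgment.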

This doesn’t replace judgment. It finally gives judgment a place to live inside the system.

A Quiet Tease

I’m working on something that reflects all of this.

It’s shaped by decades of building software the hard way, watching teams struggle with invisible friction, and seeing how often people end up doing work that tools should have handled better.

I’m not ready to talk about it publicly yet.

But if modern dev tools feel powerful and exhausting at the same time, we’re probably thinking about the same problem.

More soon.

From Monochrome to Machine Learning: Reflecting on a Lifetime in Tech Before the Next Leap

The first computer most people encounter today fits in their pocket.
Mine filled most of a desk. Actually, a desk and a printer cabinet next to it.

My very first computer experiences were on our school’s Apple II machines, playing Oregon Trail on either black-and-white TVs or bright green monochrome monitors. Later at home, in the late 1980s, we got an Epson Equity I+ with a 10 MHz CPU, 640K of RAM, a 10 MB hard drive, a 5.25-inch floppy drive, and an EGA monitor that felt like stepping into full color after years of monochrome. If you lived through that transition, you remember how magical it felt. Not just more pixels, but more possibility.

Since then I have watched wave after wave of transformation reshape software and the world around it.

  • The CPU evolution, from the simplicity of that first 10 MHz machine to modern processing stacks capable of teraflops.
  • Gaming advances from Pac-Man at the arcade or on Atari at home, to Super Mario on Nintendo, from 8-bit to 16-bit to 64-bit, and eventually to fully immersive 3D experiences.
  • The rise of the internet, starting with dial-up BBS systems and rapidly becoming the global information network we rely on today.
  • Communication shifting from letters and postcards to electronic messaging, long before anyone imagined what smartphones would become.
  • Social platforms growing from AOL chatrooms and small forums to worldwide communities where billions share ideas.
  • The mobile phone evolving from a clunky brick attached to a briefcase to a pocket-sized computer that dwarfs the power of my first PC.
  • Music and movies becoming instantly accessible. No more waiting for release days or storing shelves full of physical media.
  • Breakthroughs in biology that finally let us map DNA and understand life in new ways.
  • Automation taking form in 3D printing, robotics, and smart home technology.
  • Machine learning taking root, followed by modern generative AI that can now write, reason, and assist in ways we only imagined in science fiction.

And these are just the highlights. After 25 years in the industry and a lifetime of tinkering, the list is a long one.

What is interesting about right now, especially with 2026 on the horizon, is that it feels like another one of those rare pivot points. Except this time the shift is not about faster processors or prettier screens. It is about the relationship between humans and software fundamentally changing.

We are entering an era where tools are not just tools. They are partners. They help us think, reason, and build. They fill gaps. They accelerate ideas. They lift creativity instead of replacing it.

For the first time in my career, I am building something that sits directly in the center of that shift. Something that focuses on how to evolve the bigger picture rather than profit from it. Something designed with everyone in mind, not tied to advancing any particular agenda.

I will soon be announcing something big for software development. At least, I think it is big. For now, I will simply say this:

Everything I have learned since that Epson Equity, from the GW-BASIC and MS-DOS 3.2 manuals, through distributed computing, microservices, and machine learning, all the way to modern AI, has been quietly converging into a project I have been shaping for a very long time. I just did not realize it until recently.

It is close now.

And once it is ready, I believe it will change how a lot of people think about building software, and possibly grow into something far larger than that.