Andrii Yasinetsky

The Sovereign Engineer

Here’s my essay on the impact of AI on engineering, skills, hiring, and the job market, which I also shared with my team at Diadia Health at the start of the year.

I have been thinking about this at both micro and macro levels - this is my attempt to step back, ground my thoughts in what actually changed this year, and be explicit about how I think the work of engineering and research orgs will evolve.

What Actually Changed in 2025

The biggest shift last year was not “better answers” or “faster coding.” It was time horizon.

Models crossed a threshold where they can now stay coherent and productive on a task for hours. Not minutes. Not a quick diff. Hours.

That sounds incremental until you live inside it. Don’t just think about Opus 4.5 - start thinking about 5-6 and how impactful it’s going to be for the work you are doing today.

The amount of work per hour with these tools simply would not have been possible a year ago. Not even close. A year ago I was typing almost all of my code into an editor myself. In the last two months I barely touched it.

The bottleneck wasn’t ideas or code. It was me reviewing and directing output.

We Are Now the Slowest Part of the System

The output capacity of these agents now massively exceeds a single human’s ability to:

  • Review
  • Sanity-check
  • Integrate
  • Decide what to do next

This flips the job, and consequently the expectations of job candidates.

Our role in 2026 is not primarily “writing software” in the traditional sense. Instead, it is removing ourselves as the bottleneck by designing systems that do large chunks of the work for us while maintaining a high quality bar and taste.

The quality of decisions you make while using agents now has far greater impact on the output than before. One wrong decision and you end up with thousands of lines of AI slop that will take you forever to review and change. It is much easier to make small tactical corrections to a model’s output that was built on sound decisions than to rewind an entirely wrong path. You still need to apply the right judgment and know when things don’t look right. That’s a skill.

The Real Question Going Into 2026

The question is no longer: How do we use AI to solve problem X?

It’s: How do we use AI to build systems that build better AI to solve problem X?

That’s a fundamentally different framing. And it creates massive leverage for small, highly skilled teams.

What Teams Look Like Now

At Diadia Health we drastically changed the composition of our team and its responsibilities in late 2025. When we started two years ago we followed a more traditional approach, dividing zones of responsibility by expertise: backend, frontend, infra, AI. Folks who worked on the web app had a direct dependency on the backend team, and so on.

But recently we restructured the team, blending zones of expertise into one or two comprehensive roles and requiring that most of the team work across the entire stack. One exception is AI research, which remains highly specialized.

I call this role the Sovereign Engineer.

That also meant that we had to part ways with a few colleagues who happened to be less adaptable to this new reality. That wasn’t easy. These were good people. But the role they were hired for no longer exists.

As a result, the team ships at a much higher velocity now, with everyone owning entire projects end to end: frontend, backend, infra, AI orchestration. AI makes it all possible.

I believe the future for teams and candidates will look like an extreme barbell distribution:

1. Highly agentic “AI-native” juniors (10x impact)

They are almost never blocked by “how”. They can get working code for nearly anything in minutes or hours.

What blocks them is: not knowing what to ask, not recognizing when something works but is wrong, not having the taste to know “this will bite us later”.

This is where the next group helps to close the gap.

2. Highly skilled “AI-empowered” professionals (100x impact)

They carry a mental model of the whole system - technical, organizational, temporal. They know which abstractions are load-bearing and which are decorative. They have taste. They start new high-impact projects and initiatives (high agency), and they ship a lot of high-quality, high-impact work in parallel across the entire stack daily.

In the team setting they also look at the problem and say:

  • We shouldn’t build this feature at all
  • Delete these 8,000 lines instead of fixing them
  • This entire system is the wrong abstraction - we need to step back
  • This is technically correct but will confuse every engineer who touches it next year
  • This “minor bug” is a symptom of a fundamental problem
  • The AI’s approach works but won’t survive 10x scale

There is no room left for mediocrity, an average skill set, or low agency. To stay competitive you now have to optimize for the extremes of the spectrum.

What This Means for Team Composition

I used to follow the classic team skillset distribution: optimize around different skills, levels of experience, mentorship hierarchy.

That was a great way to build effective teams for a long time. In the age of AI it is no longer relevant.

Things that matter most now:

  • Agency
  • Creativity
  • Breadth of knowledge and experience overall AND depth in at least one or two areas
  • Ability to single-handedly execute complex projects end to end with little oversight

Mentorship fundamentally changes shape as well. The old model was a ladder. Senior teaches mid-level, mid-level teaches junior. Knowledge trickled down through layers. The middle existed partly as a mentorship mechanism - a place where people learned by doing progressively harder work under decreasing supervision.

The new model looks more like apprenticeship. Direct transmission from expert to novice, with AI handling the middle layer of “how do I implement this?” questions. The junior no longer needs a human to explain syntax, patterns, or basic architecture. The AI does that, endlessly and patiently.

What the junior still needs from a mentor:

  • Judgment under ambiguity. When the AI gives you three plausible approaches, how do you choose? That’s taste and experience.
  • Knowing what questions to ask. While AI gives you direct answers, seniors know what you should have asked but didn’t.
  • Permission to trust your judgement. When to override the AI. When your instinct is right even if you can’t explain why.

The risk is that mentorship atrophies entirely. I don’t know whether that will fully come to pass, but it is definitely a plausible outcome. Seniors are busy being 100x, while juniors are expected to be self-sufficient with AI.

How Hiring Changes

The question now is how to optimize your hiring for these kinds of candidates.

Anthropic recently open-sourced a take-home where candidates are tasked with outperforming AI (Opus 4.5 at the time) on a kernel performance optimization task. I strongly believe this is the future of hiring.

At Diadia we have been using take-homes for a while, but with AI the baseline of expectations moved up significantly.

A candidate’s ability to outperform AI on at least one axis is now the norm. Depending on the role this axis will vary, but it will be a key component in evaluating the best candidates in the future.

We found that the highest-signal candidate submissions came when the take-home was unbounded in scope, with only a minimum baseline defined.

Essentially, we ask candidates to get as far as they can with the implementation within a given timeframe. It used to be a day, but the window now varies with the role and the complexity of the task.

One candidate who really impressed us, and recently joined the team, built a fully functional simplified version of our core AI product: design, backend, core algorithm, and even tests. The app was deployed and fully functional. The code was well documented and structured, with supporting documentation for how to deploy and test the app.

He did it all in under six hours by orchestrating agents to understand the requirements, write code, documentation, and tests, debug hidden issues placed there on purpose, and design the core algorithm.

Obviously, if one can build this much in just 6 hours, imagine what the same person can do in a week.

But the bar keeps rising because AI gets better.

How to Position Yourself

For those navigating this new job market:

  • Make your work visible. Show clear evidence of what you have shipped (open source, papers, blog posts, products)
  • Treat agents as first-class contributors, not just tools
  • Invest heavily in evals, benchmarks, and automated checks
  • Learn to push repeatable research and engineering work into agents and pipelines
  • Design workflows bottom-up so humans focus on judgment, not throughput (example: minor code duplication is better for agents than complex abstractions - optimize for agents, not humans)
  • Use multiple agents to review and validate output. Make sure agents write tests to validate their own output
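The eval and self-validation bullets above can be sketched as a small automated gate: run the agent’s output through a set of checks and only accept it when every check passes. This is a minimal illustrative sketch, not a real agent API - the check functions here (`compiles`, `has_tests`) are hypothetical stand-ins for shelling out to a real compiler or test runner.

```python
# Minimal sketch of an automated acceptance gate for agent output.
# All function names are hypothetical placeholders, not a real agent framework.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    name: str
    passed: bool

def compiles(output: str) -> CheckResult:
    # Placeholder heuristic; in practice, invoke the compiler or interpreter.
    return CheckResult("compiles", passed="def " in output)

def has_tests(output: str) -> CheckResult:
    # Require the agent to have written tests for its own code.
    return CheckResult("has_tests", passed="def test_" in output)

def gate(output: str, checks: list[Callable[[str], CheckResult]] = [compiles, has_tests]) -> bool:
    """Accept the output only if every automated check passes; otherwise send it back."""
    results = [check(output) for check in checks]
    for r in results:
        print(f"{r.name}: {'PASS' if r.passed else 'FAIL'}")
    return all(r.passed for r in results)

agent_output = (
    "def add(a, b):\n"
    "    return a + b\n"
    "\n"
    "def test_add():\n"
    "    assert add(1, 2) == 3\n"
)
accepted = gate(agent_output)
```

The point is the shape, not the checks: humans stay out of the loop until the gate passes, which is where the leverage comes from.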

But the most important skill is the ability to communicate your thoughts clearly and to write them down in an articulate, cohesive way.

What Not to Do

  • Optimize for manual heroics
  • Confuse “more code” with progress
  • Ship without clear eval signals
  • Assume today’s workflows survive another model generation

Coding itself is effectively solved at this point. What isn’t solved is context, intent, evaluation, and trust. That’s where your leverage is.

A Closing Thought

What’s genuinely lost is coding as craft. But perhaps that was never the hardest part. The hardest part has always been figuring out what to build.

With that, this is by far the most exciting time to be an engineer. With enough curiosity, agency, and good judgment, anyone can create pretty much anything.

Strong engineering fundamentals and passion for creation will get you very far. The teams that win are the ones that lean into this early, redesign their workflows honestly, and stay curious instead of defensive.

Making work cheaper doesn’t mean less work gets done. It means entire categories of work suddenly become viable. Our agency matters most: what we do with this new capacity is what counts now.