In our first editorial, we examined the broad skills landscape across 23,000 listings and found that System Design and Technical Leadership — not AI — topped the charts. Two weeks later, our dataset has grown to 30,206 tracked listings across 41 companies — 28,771 currently open — with closed-job detection now tracking what disappears from career pages.
This time we focus on a narrower question: how are AI coding tools — Claude Code, GitHub Copilot, agentic frameworks — actually showing up in hiring data? The commentary around these tools has been breathless. The data tells a more grounded story.
Before diving into AI-specific signals, it helps to understand what the official labor data says about the tech job market right now.
The BLS Employment Situation released today (February 11, 2026) shows nonfarm payrolls grew by 130,000 in January — beating the 55,000 consensus — with the unemployment rate at 4.3%. Professional and business services added 34,000 jobs. But the headline masked a sobering revision: the BLS slashed 2025 job growth from 584,000 to just 181,000, making last year one of the weakest for employment in over a decade.
The tech picture is more nuanced. According to CompTIA's Tech Jobs Report, tech industry employment fell by 20,155 jobs in January (mostly telecom losses), yet employer job postings for tech positions surged 13% from December to 220,420 new listings, with over 465,000 active postings. The tech unemployment rate sits at 3.6% — well below the national 4.3%. By February, tech companies added an estimated 2,340 workers, with 185,000 new postings and 436,000+ active listings.
Meanwhile, the JOLTS report for December 2025 showed total job openings fell to 6.5 million — the lowest since December 2017. Professional and business services openings dropped 21.8% (284,000 positions) over the quarter. The Indeed Hiring Lab describes the market as a "low-hire, low-fire dynamic," with the quits rate stuck at 2% (below the 2019 average), suggesting workers feel less confident about switching.
The layoff numbers tell a parallel story. Crunchbase tracks approximately 127,000 tech workers laid off in 2025 (up 32% from 95,667 in 2024), and the 2026 pace is running higher still. January 2026 saw 108,435 total layoffs economy-wide — the highest January since 2009.
The bottom line: the tech labor market is cooling but not frozen. Demand for tech workers remains above the broader economy (3.6% vs 4.3% unemployment), and job postings are rising — but the jobs being created are shifting in character. That shift is what our Skilark data illuminates.
The clearest signal of the AI coding tool wave is linguistic. Across our 28,771 open listings, 107 jobs now carry "agentic" in the title. That is 0.37% of all roles — tiny in absolute terms, but notable because this word barely existed in job postings six months ago.
The distribution is revealing:
| Company | Agentic Titles |
|---|---|
| Capital One | 29 |
| NVIDIA | 14 |
| Salesforce | 11 |
| eBay | 7 |
| Adobe | 6 |
| Apple | 6 |
| Databricks | 6 |
| Microsoft | 6 |
Capital One leads with 29 agentic roles — more than any pure-play AI company. Titles like "Distinguished AI Engineer (Agentic AI Platform)" and "Applied Researcher (LLM Core and Agentic AI)" suggest the bank is building dedicated infrastructure for autonomous AI systems. Apple, Adobe, and Salesforce each have agentic roles tied to specific product surfaces: Siri, Photoshop, and enterprise workflows respectively.
This pattern matters. "Agentic" is not confined to AI labs. It is showing up at financial institutions, enterprise software companies, and consumer hardware makers. The technology is being productized, not just researched.
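As a minimal sketch of how a title-level signal like this can be computed — the `title` and `company` field names and the toy records are hypothetical, not Skilark's actual schema:

```python
from collections import Counter

def agentic_title_stats(listings):
    """Count listings whose title contains 'agentic' (case-insensitive)
    and break the matches down by company."""
    matches = [job for job in listings if "agentic" in job["title"].lower()]
    share = 100.0 * len(matches) / len(listings) if listings else 0.0
    by_company = Counter(job["company"] for job in matches)
    return len(matches), share, by_company

# Toy data standing in for the real crawl.
sample = [
    {"title": "Distinguished AI Engineer (Agentic AI Platform)", "company": "Capital One"},
    {"title": "Applied Researcher (LLM Core and Agentic AI)", "company": "Capital One"},
    {"title": "Senior Software Engineer", "company": "NVIDIA"},
    {"title": "Agentic Systems Engineer, Photoshop", "company": "Adobe"},
]

count, share, by_company = agentic_title_stats(sample)
print(count, f"{share:.2f}%", by_company.most_common(2))
```

The same substring match over 28,771 open titles yields the 107-role, 0.37% figure reported above.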
Microsoft has 69 open roles with "Copilot" in the title. These span engineering, product management, data science, marketing, and solution architecture — from "Member of Technical Staff, Principal Full Stack Engineer - Copilot Applications" to "Community Marketing Manager - Copilot."
This is not a small internal project. It is a product line with dedicated headcount across the entire organizational stack. When a single company creates 69 named positions around an AI coding assistant, it signals that AI-augmented development has moved from experiment to core business.
Beyond titles, "Copilot" appears in 964 listing descriptions across our full dataset — more than the body-keyword mentions of Kubernetes and Docker combined. The AI coding assistant has become as much a part of the infrastructure vocabulary as containers.
The more interesting story is not the dedicated AI roles — it is how generative AI language is infiltrating job descriptions across categories that have nothing to do with AI research.
We searched for mentions of "generative AI," "agentic," or "LLM" in listing descriptions, broken down by role category:
| Role Category | Listings | AI Mentions | Penetration |
|---|---|---|---|
| Research | 313 | 118 | 37.7% |
| AI/ML | 2,550 | 878 | 34.4% |
| Solutions | 573 | 132 | 23.0% |
| Product | 1,694 | 215 | 12.7% |
| Software Engineering | 5,783 | 626 | 10.8% |
| Program Management | 747 | 72 | 9.6% |
| Security | 1,073 | 88 | 8.2% |
| Platform | 765 | 58 | 7.6% |
| Data Engineering | 668 | 49 | 7.3% |
That Research (37.7%) and AI/ML (34.4%) lead the table is expected. The surprise is Solutions at 23% — these are customer-facing technical roles where "generative AI" now appears as a required competency, not an area of research. Product management sits at 12.7%, and even software engineering — the broadest category at 5,783 roles — shows 10.8% penetration.
Security's 8.2% penetration is particularly notable. These are not AI security research positions. They are security engineers whose job descriptions now include defending against or governing AI systems.
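The penetration figures above come down to a keyword search over descriptions, grouped by role category. A minimal sketch, assuming hypothetical `category` and `description` fields:

```python
import re
from collections import defaultdict

# Matches "generative AI", "agentic", or "LLM"/"LLMs" as a whole word.
AI_TERMS = re.compile(r"generative ai|agentic|\bllms?\b", re.IGNORECASE)

def penetration_by_category(listings):
    """Share (%) of listings per role category whose description
    mentions any of the tracked AI terms."""
    totals, hits = defaultdict(int), defaultdict(int)
    for job in listings:
        totals[job["category"]] += 1
        if AI_TERMS.search(job["description"]):
            hits[job["category"]] += 1
    return {cat: round(100.0 * hits[cat] / totals[cat], 1) for cat in totals}

# Toy data standing in for the real listings.
sample = [
    {"category": "Security", "description": "Defend LLM-powered systems."},
    {"category": "Security", "description": "Incident response for cloud infra."},
    {"category": "Research", "description": "Agentic planning research."},
]
print(penetration_by_category(sample))
```

The word-boundary match on `llm` avoids false hits inside unrelated tokens; the phrase terms are matched as plain substrings.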
When we look at what skills co-occur with LLM-tagged listings, a clear technology stack emerges:
| Skill | Co-occurrence with LLMs |
|---|---|
| Python | 1,242 |
| NLP | 906 |
| System Design | 618 |
| Technical Leadership | 521 |
| SQL | 355 |
| Azure | 350 |
| Distributed Systems | 344 |
| PyTorch | 332 |
| REST APIs | 324 |
| RAG | 276 |
| Transformers | 241 |
Python dominates, appearing in 67% of LLM-related listings. But the second-most common co-skill is not a framework — it is NLP (906), reflecting that LLM work remains grounded in language understanding fundamentals. System Design (618) and Technical Leadership (521) in third and fourth place confirm that LLM roles are senior-weighted. Companies want architects who can build LLM systems, not just prompt them.
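Co-occurrence counts like these can be built with a single pass over tagged listings. A sketch, assuming a hypothetical `skills` tag list per listing and an `"LLMs"` anchor tag:

```python
from collections import Counter

def cooccurring_skills(listings, anchor="LLMs"):
    """Count skill tags that appear in the same listing as the anchor tag."""
    co = Counter()
    for job in listings:
        skills = set(job["skills"])
        if anchor in skills:
            co.update(skills - {anchor})
    return co

# Toy data standing in for the tagged dataset.
sample = [
    {"skills": ["LLMs", "Python", "RAG"]},
    {"skills": ["LLMs", "Python", "System Design"]},
    {"skills": ["Python", "SQL"]},  # no LLM tag: excluded from the count
]
print(cooccurring_skills(sample).most_common(3))
```

Dividing each count by the number of anchor-tagged listings gives the share figure quoted for Python above.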
RAG (Retrieval-Augmented Generation) at 276 co-occurrences is worth highlighting. This pattern — grounding LLM outputs in retrieved documents — has moved from research technique to production requirement. Microsoft (71 RAG-tagged listings), Capital One (35), and Apple (25) lead adoption.
Here is where we must be honest about the limits of job posting data for understanding AI coding tool adoption.
Claude Code, Cursor, and similar tools are changing how individual developers work. The reports of productivity gains are credible — senior engineers describe delegating complex multi-step implementation tasks to agentic coding assistants and getting back code that requires review rather than rewriting. This is a real shift in developer workflow.
But job postings are a lagging indicator of tool adoption. Companies do not post "Must know Claude Code" in their requirements the way they list Python or Kubernetes. AI coding tools are adopted bottom-up by individual developers, often before procurement or HR knows about them. Our data shows 245 mentions of "Claude" in listing bodies (mostly at Anthropic itself) and 101 mentions of "GitHub Copilot" — but these dramatically undercount actual usage.
What we can see is the downstream effect. The skills that AI coding tools amplify — System Design, Technical Leadership, architectural thinking — remain the top-demanded skills in our dataset. The skills that these tools partially automate — writing boilerplate, test generation, routine refactoring — do not appear as standalone requirements. No one lists "can write unit tests" as a top skill when the coding assistant handles it.
This may help explain the seniority skew we reported in our first editorial. With AI coding tools handling implementation mechanics, the premium shifts to the judgment layer: knowing what to build, how to architect it, and when the AI-generated code is wrong. Our data shows mid-level (47.3%) and senior (29.1%) roles dominating, with junior positions at just 6.2%. AI coding tools did not cause this skew, but they may be accelerating it.
A widely circulated report this month declared a "SaaSpocalypse" — the collapse of SaaS companies under the weight of agentic AI that can replace entire software categories. The narrative points to stock declines at Salesforce, Adobe, and ServiceNow as evidence.
Our data tells a different story. Every company in our dataset that supposedly faces existential threat from AI agents is actively hiring at scale:
| Company | Open Listings | Closed |
|---|---|---|
| Salesforce | 1,176 | 138 |
| Adobe | 1,145 | 51 |
| Microsoft | 4,026 | 218 |
Salesforce has a 10.5% close rate — the highest in our sample — but still has nearly 1,200 open roles, many of which explicitly reference agentic AI and generative AI capabilities. Adobe is hiring for "Agentic Systems" roles within Photoshop. These companies are not being replaced by AI agents. They are hiring to build them.
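For clarity, the close rate here is closed listings as a share of all tracked listings (open plus closed). A one-liner makes the arithmetic explicit:

```python
def close_rate(open_count, closed_count):
    """Closed listings as a percentage of all tracked listings."""
    total = open_count + closed_count
    return 100.0 * closed_count / total if total else 0.0

print(f"{close_rate(1176, 138):.1f}%")  # Salesforce figures from the table -> 10.5%
```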
Stock prices reflect investor sentiment. Job postings reflect operational reality. The two often diverge, and right now the divergence is stark.
The AI coding tool revolution is real, but its impact on the job market is more nuanced than the breathless commentary suggests.
The tools are raising the floor, not lowering the ceiling. AI coding assistants make competent engineers more productive. They do not eliminate the need for engineers who understand system design, can evaluate architectural trade-offs, and know when the generated code is subtly wrong. These skills — which top our demand charts — become more valuable, not less.
"Agentic" is becoming a product category, not just a buzzword. With 107 dedicated agentic roles across 17 companies — from banks to hardware makers — the pattern is clear. Companies are building agentic AI products, and they need engineers who understand the full stack: LLMs, RAG, system design, and the judgment to know when autonomous systems need guardrails.
The tool itself is not the skill. No employer is hiring for "Claude Code proficiency." They are hiring for the outcomes these tools enable: faster iteration, better architecture, higher-quality code review. The engineers who thrive will be those who use AI coding tools to amplify their existing expertise — not those who depend on them to compensate for missing fundamentals.
Our dataset now tracks 30,206 listings across 41 companies, with 1,435 jobs detected as closed. We will continue watching how the agentic shift shows up — and does not show up — in what companies actually hire for.
Methodology: Skilark tracks public job listings from 41 major technology employers. Listings are classified by skills, seniority, role category, and work arrangement. "Closed" listings are jobs that disappeared from a company's career page between consecutive crawls. Statistics reflect data as of February 11, 2026.
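The closed-listing detection described above amounts to a set difference between consecutive crawl snapshots. A minimal sketch, using hypothetical job-ID strings:

```python
def diff_crawls(previous_ids, current_ids):
    """Compare two crawl snapshots of job IDs.

    IDs present before but missing now are marked closed;
    IDs appearing for the first time are newly opened.
    """
    previous, current = set(previous_ids), set(current_ids)
    return previous - current, current - previous

closed, opened = diff_crawls({"j1", "j2", "j3"}, {"j2", "j3", "j4"})
print(closed, opened)
```

In practice a production pipeline would also guard against transient crawl failures (e.g. requiring an ID to be absent across two crawls) before declaring a job closed; that refinement is omitted here.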
Explore the data behind this analysis: