depthfirst raises $80M to scale security-specific AI models beyond smart contract defense

depthfirst has raised $80 million to expand its AI-native security platform and train specialized models. Find out what the move means for cyber defense.
Representative image of AI-driven cybersecurity and venture funding, illustrating the depthfirst $80 million Series B story and the rise of domain-specific security models in software defense.

depthfirst has raised $80 million in Series B funding less than 90 days after emerging from stealth with a $40 million Series A, bringing total capital raised to $120 million. The San Francisco-based applied AI lab said the new round was led by Meritech Capital, with participation from Forerunner Ventures, The House Fund, and existing investors including Accel, Box Group, Liquid 2 Ventures, Alt Capital, and Mantis VC. The company also used the announcement to introduce dfs-mini1, its first in-house security model, initially focused on smart contract vulnerability detection and verification. The combined message is not subtle: depthfirst is trying to position itself not merely as another AI security tool vendor, but as a company building its own domain-specialized intelligence layer for a security market that increasingly suspects general-purpose models may not be enough.

Why does depthfirst’s $80 million Series B matter for the next phase of AI-native software security?

The funding round matters because it highlights a more serious shift underway in cybersecurity venture thinking. For the past year, investors have poured money into AI-enabled security categories ranging from copilots to automated triage to SOC workflow enhancement. But the more durable question has been whether AI security companies are simply wrapping frontier models with better interfaces or actually building differentiated systems that can outperform on security-specific tasks. depthfirst is clearly pitching itself in the second camp.

That distinction matters because software security is an ugly, high-noise, high-context problem. Traditional scanners generate far too many alerts, and human teams often drown in false positives while real exploitable weaknesses hide in business logic, infrastructure drift, and application complexity. A company that can materially reduce noise while generating fixes developers actually merge does not just improve workflow efficiency. It changes the economics of modern application security programs.

depthfirst’s pitch, as described in its own materials, is that its platform reasons across code, infrastructure, and business logic to find vulnerabilities and push precise fixes inside developer workflows. That is a much more ambitious claim than “we summarize alerts with AI.” It implies an attempt to compress detection, prioritization, and remediation into a more continuous security system. If that approach works at enterprise scale, it could threaten legacy application security vendors whose products still rely too heavily on static signatures, brittle rules, or alert-heavy pipelines.

Why is depthfirst building its own security model instead of relying only on frontier AI models?

The launch of dfs-mini1 is arguably the more important part of the announcement. Capital raises make noise. Model strategy determines whether that noise ages well.

depthfirst said dfs-mini1 was built on an open-source model and then post-trained through reinforcement learning in security-specific environments, with an initial focus on cryptocurrency smart contracts. The strategic message is that domain control matters. In other words, the company appears to believe that owning at least part of the training and evaluation stack will matter more over time than simply renting intelligence from a large foundation model provider.

That is a sensible bet. Security is one of the few enterprise categories where generic fluency can be almost useless without domain precision. A model that writes clean prose or explains APIs elegantly may still fail on exploit logic, verification depth, code-path reasoning, or remediation reliability. Security buyers do not pay for eloquence. They pay to avoid breaches, reduce analyst drag, and get fixes into production without causing chaos.

There is also a cost angle buried in the release that deserves attention. depthfirst claims dfs-mini1 outperformed frontier models in its initial application while running at 10x to 30x lower cost. If that performance-cost ratio holds in real-world settings, it would give the company more than a research talking point. It would create room for better unit economics, wider deployment inside customer environments, and possibly more aggressive pricing against slower, more expensive competitors. In AI infrastructure, cheaper intelligence is not just margin candy. It can be a product strategy.

How important is smart contract security as a wedge for broader enterprise cybersecurity expansion?

At first glance, focusing the first in-house model on smart contracts may look niche. It is not. It is a clever proving ground.

Smart contract security is brutally unforgiving. Code is transparent, exploits can be immediate, and financial losses can be spectacular. That makes it a useful environment for training models on vulnerability detection and patching tasks where correctness matters more than style. It also provides a measurable, adversarial domain in which specialized models can demonstrate real capability before expanding into adjacent security workloads.

The more interesting claim from depthfirst is not that dfs-mini1 works on smart contracts. It is that early internal evaluations suggest the training approach may generalize beyond smart contracts into other security tasks. If that proves true, then smart contracts are not the destination. They are the gym.

That generalization claim should still be treated carefully. Security is not one domain but many. Web application flaws, cloud misconfigurations, identity abuse, business logic bugs, secrets exposure, CI/CD compromise, and infrastructure drift each behave differently. A strong smart contract model does not automatically become a strong enterprise security model. Still, as a beachhead strategy, the choice makes sense. Start where the signal is harsh, the benchmark is tangible, and the buyer pain is expensive.

What does the EVMbench comparison really prove, and what should enterprise buyers question next?

This is where the story gets spicy in a useful way.

depthfirst said dfs-mini1 was evaluated on OpenAI’s EVMbench, a benchmark introduced with Paradigm in February 2026 to test whether AI agents can detect, patch, and exploit smart contract vulnerabilities. That gives the company a timely and recognizable way to frame performance. But benchmarks in security are not neutral holy texts. They are living arguments about what counts as skill.

OpenZeppelin publicly criticized aspects of EVMbench in March, arguing that parts of the dataset and vulnerability labeling contain methodological flaws and contamination risks. That does not invalidate every result tied to the benchmark, but it does mean buyers and investors should not treat benchmark wins as the final word on production-grade security performance. In security, the graveyard is full of products that benchmarked beautifully and operationalized awkwardly.

So the real question is not whether dfs-mini1 looks good on EVMbench. The real question is whether depthfirst can show repeatable enterprise outcomes across broader software environments. The company’s early customer metrics are encouraging, especially the claim that 80% of its fix recommendations are accepted and merged by developers. That is one of the more commercially meaningful metrics in the release because it measures behavior, not just model output. Still, enterprise buyers will want to know acceptance rates by environment, by issue severity, by codebase complexity, and by post-merge defect rates. Those numbers separate promising demos from durable platforms.

How could depthfirst challenge legacy application security vendors and newer AI security startups?

If depthfirst executes well, it could create pressure from two directions at once.

First, it could unsettle legacy application security vendors whose tools still generate too much alert noise and rely on security teams to do the painful reconciliation work. A platform that finds more relevant issues and gives developers ready-to-merge fixes compresses time-to-remediation and weakens the case for buying separate tools across static analysis, prioritization, and remediation assistance.

Second, it could also pressure the newer crop of AI security startups that rely heavily on third-party frontier models but lack differentiated training data, evaluation frameworks, or product workflows. Once customers start asking who owns the training loop, who measures model drift, and who can optimize for security-specific objectives rather than generic reasoning, the market may become less friendly to thin orchestration layers.

That is likely why the release leans so hard into specialized data, domain-specific evaluation, and post-training expertise. depthfirst is trying to signal that its moat will not come from AI branding. It will come from owning the relationship between domain data, model tuning, workflow integration, and measurable outcomes. In a market where everyone suddenly says “agentic,” that is a healthier place to compete.

What execution risks could slow depthfirst even after this funding round and product momentum?

The biggest risk is scale discipline. Security buyers are notoriously skeptical, procurement cycles are slow, and production trust is hard to earn. Raising $120 million quickly is useful, but it also creates pressure to expand fast, sell broadly, and prove that the product works across varied environments rather than in a narrow early-adopter band.

Another risk is model credibility. By choosing to build security-specific models, depthfirst is taking on a more ambitious technical burden than companies that merely integrate external APIs. That creates opportunity, but it also raises the bar. The company now has to show it can continually improve its model stack, defend its evaluation methods, and keep pace as frontier model providers themselves become more security-capable.

There is also category risk. Security teams do not rip out incumbent tooling lightly, especially when regulators, auditors, and internal governance functions still expect explainability and process control. An AI-native platform may be loved by developers and still move slowly in regulated enterprises if proof, auditability, or policy alignment feels incomplete.

Even so, the funding gives depthfirst room to absorb those challenges. The company said the new capital will be used to train additional security models, expand its AI research team, and scale enterprise adoption. That suggests the next phase will test whether it can move from sharp wedge strategy to broader platform relevance without diluting what made the wedge interesting in the first place.

What do the key takeaways from depthfirst’s Series B funding mean for AI security competition in 2026?

  • depthfirst is trying to move the AI security conversation away from copilots and toward ownership of security-specific model performance.
  • The launch of dfs-mini1 suggests the company sees model specialization, not just workflow design, as its long-term moat.
  • Smart contract security is being used as a high-signal proving ground rather than a narrow end market.
  • The company’s claim of lower-cost performance could matter as much as raw benchmark results if it holds up in production.
  • Fix acceptance and merge rates are more commercially relevant than benchmark scores because they indicate workflow adoption.
  • Legacy application security vendors may face pressure if AI-native platforms start collapsing detection and remediation into one loop.
  • Newer AI security startups that depend entirely on general-purpose models could also be exposed if customers begin demanding domain-trained systems.
  • EVMbench visibility helps depthfirst market its model, but criticism of the benchmark means technical buyers will demand stronger field evidence.
  • The funding round gives depthfirst the balance-sheet room to invest in research talent, enterprise sales, and broader domain coverage.
  • The bigger question for 2026 is whether specialized security models become a category norm or remain a wedge used by a few technically ambitious startups.
