Top Tech News Today, February 24, 2026
It’s Tuesday, February 24, 2026, and here are the top tech stories making waves today. AI competition is heating up on multiple fronts. In the past 24 hours, the story has shifted from who has the smartest models to who controls the chips, the infrastructure, and the guardrails around increasingly autonomous systems. Big Tech is preparing to pour hundreds of billions into AI buildouts, while new allegations around model training, distillation campaigns, and agent-driven risks highlight how quickly the stakes are rising.
At the same time, the security cracks are getting harder to ignore. From AI-assisted cyberattacks and social engineering breaches to IoT exposures inside people’s homes, the threat surface is expanding alongside innovation. Layer in growing government pressure on AI vendors, a fresh robotaxi proving ground in London, and aggressive AI adoption pushes in China, and one thing is clear: the next phase of the tech cycle is being shaped in real time.
Here are the 15 biggest tech news stories you need to know today.
Technology News Today
Big Tech’s 2026 AI infrastructure bill could hit $650B as capex race accelerates
Alphabet, Amazon, Meta, and Microsoft are on track to collectively invest about $650 billion this year to scale AI-related infrastructure, according to analysis from Bridgewater Associates. The figure implies a sharp step-up from last year’s estimated $410 billion, as hyperscalers scramble to close a widening gap between compute demand and supply.
Bridgewater warned clients that the AI boom is entering a “more dangerous phase,” where the sheer volume of physical buildout (chips, data centers, power) makes the downside larger if expectations slip or capital tightens. The note also points to financial behavior that’s starting to look different from prior cycles: companies curbing share buybacks more aggressively to preserve cash for capex, and a growing reliance on outside capital to keep the buildout moving.
The broader signal for startups and the ecosystem: the platform layer is doubling down on infrastructure first, and the pressure is shifting to the application layer to prove it can generate profit pools big enough to justify the spend. Bridgewater specifically flags that leaders like OpenAI and Anthropic will need major product breakthroughs to keep backing for future mega-fundraises and IPO paths, while the spending wave may also push up equipment and electricity costs in some regions.
Why It Matters: The AI economy is increasingly being shaped by who can finance and power compute at scale, not just who has the best model.
Source: Reuters.
U.S. official says China’s DeepSeek trained its next AI model on Nvidia Blackwell despite export controls
A senior Trump administration official told Reuters that Chinese startup DeepSeek trained its latest model—expected as soon as next week—using Nvidia’s Blackwell, the company’s most advanced AI chip. If accurate, that would raise fresh questions about enforcement and the effectiveness of U.S. export controls designed to restrict China’s access to top-tier AI hardware.
The allegation lands at a sensitive moment: Washington is already wrestling with how quickly frontier chips diffuse through global supply chains, including gray markets and intermediary networks. Even when direct sales are restricted, high-end GPUs can surface through third parties, offshore entities, or cloud pathways, complicating compliance and monitoring. The reported use of Blackwell for training would intensify pressure on regulators to tighten controls—and on Nvidia and its partners to strengthen know-your-customer and distribution oversight.
For the startup ecosystem, the story underscores a structural reality: compute advantage is now geopolitical. If Chinese labs can access top-class accelerators despite bans, competitive timelines compress, open-source and fast-follow model releases accelerate, and AI safety and IP disputes become more frequent. It also raises the stakes for U.S. and allied investment in domestic manufacturing, secure supply chains, and auditable compute provenance.
Why It Matters: Export controls are only as strong as the supply-chain visibility behind them—and AI competition is testing that limit.
Source: Reuters.
Anthropic says Chinese AI firms ran “industrial-scale” Claude distillation campaigns using 24,000 fake accounts
Anthropic says DeepSeek and two other Chinese AI companies abused Claude to improve their own models, describing “industrial-scale campaigns” involving roughly 24,000 fraudulent accounts and more than 16 million exchanges with Claude. The company frames the activity as a large-scale distillation effort—using outputs from one model to train or tune another—at a scope that’s hard to dismiss as a few rogue users.
If this pattern holds, it’s a preview of the new frontline in AI competition: not just model quality or training data, but operational security and platform abuse controls. The incentive is straightforward. Distillation can compress development timelines and costs, especially for labs trying to match proprietary systems. But it also pulls AI vendors deeper into policing: detecting synthetic account farms, rate-limiting at scale, and distinguishing legitimate enterprise usage from extraction.
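For readers unfamiliar with the mechanics, distillation usually means training a smaller "student" model to match a stronger "teacher" model's output probabilities rather than hard labels. A minimal sketch of the classic objective (pure Python, toy logits; this is an illustration of the technique in general, not Anthropic's or DeepSeek's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher T softens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions -- the standard
    distillation objective: the student is rewarded for matching the
    teacher's full output distribution, not just its top answer."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy check: a student whose logits track the teacher's incurs lower loss.
teacher = [3.0, 1.0, 0.2]
close_student = [2.8, 1.1, 0.3]
far_student = [0.2, 1.0, 3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

Harvesting millions of API exchanges gives an imitator exactly the teacher-side signal this loss needs, which is why output extraction at scale is commercially valuable.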
For customers, this kind of dispute matters because it shapes access and friction. Vendors may respond with tighter gating, more aggressive monitoring, and stricter contractual and technical enforcement. That can slow experimentation for smaller teams while favoring large buyers who can negotiate allowances. It also adds urgency to watermarking, output provenance, and verification methods—tools that could become standard in enterprise AI, the way anti-fraud tooling became standard in payments.
Why It Matters: Model “theft-by-usage” is becoming a core business risk in AI, and it will change how access, pricing, and safeguards work.
Source: TechStartups (via Anthropic).
Pentagon pressures Anthropic over Claude safeguards as military reliance on the model grows
Axios reports Defense Secretary Pete Hegseth is set to meet Anthropic CEO Dario Amodei in what a senior defense official described as a blunt, high-stakes conversation. The core issue: the Pentagon wants broader latitude to use Claude in classified environments, while Anthropic has resisted lifting safeguards entirely. Axios notes Claude is described as the only AI model available in the military’s classified systems and seen as highly capable for defense and intelligence workflows—raising both leverage and dependency on Anthropic.
This is the collision point between commercial AI policy and national security demands. Defense customers often want fewer constraints, broader permissions, and the ability to fine-tune or deploy in ways that commercial vendors view as risk amplifiers. But vendors are also managing reputational exposure, downstream misuse concerns, and the precedent that any “special exceptions” create for other powerful customers.
For startups building AI for government, the lesson is sobering: compliance isn’t just paperwork—it’s product architecture. If a model is used in sensitive systems, safeguards become part of the negotiation, and the vendor can get squeezed from both sides: pressured by governments for capability and by the market for safety posture. This also signals an expanding role for “policy-as-a-feature” (audit trails, configurable constraints, red-teaming hooks) that can make or break major contracts.
Why It Matters: Defense adoption is pushing AI vendors into decisions that reshape how safeguards, customization, and liability are handled at scale.
Source: Axios.
Axios warns of an “AI agent population explosion” flooding the internet with autonomous software workers
Axios argues that the internet is entering a phase in which AI-powered software agents—systems that can take actions within digital tools—are multiplying rapidly, creating a new kind of “population explosion.” Rather than being passive chat interfaces, these agents can click, execute workflows, call APIs, and coordinate with other agents across systems, which changes the baseline assumptions of online activity and identity.
The immediate upside is productivity: agents can handle routine operations, monitoring, customer tasks, and internal coordination at low marginal cost. But the risk surface expands just as quickly. When an agent has permissions—financial access, tokens, admin privileges—the difference between “helpful automation” and “automated incident” can be a single misconfiguration. And because agents operate at machine speed, mistakes propagate faster than traditional human-driven errors.
For the broader ecosystem, this raises second-order effects: platforms will need better agent authentication, agent-to-agent trust frameworks, and new anti-abuse defenses as bots become more capable and more common. Expect rising demand for “agent governance” startups: permissioning layers, policy engines, audit trails, sandboxed execution, and identity verification designed for non-human actors.
The near-term winners may be vendors who make agents safer to deploy in enterprises—because the appetite for automation is real, but so is the fear of letting autonomous systems roam.
Why It Matters: If agents become the dominant actors online, security, identity, and governance models must be rebuilt for a bot-heavy internet.
Source: Axios.
AI-augmented cybercriminals compromised 600+ FortiGate firewalls across 55 countries
A new incident report cited by The Register says cybercriminals using off-the-shelf generative AI tools compromised more than 600 internet-exposed FortiGate firewalls across 55 countries in just over a month. The activity is attributed to a Russian-speaking cybercrime group and shows how AI can lower the barrier for exploitation, automation, and operational scale—especially when targeting widely deployed edge devices with weak credentials or exposed management interfaces.
The report matters because it reflects a shift from “AI helps write phishing emails” to “AI helps run campaigns.” GenAI tools can assist with scripting, payload iteration, operational troubleshooting, and rapid adaptation when defenders change controls. That makes mid-tier actors more dangerous—less reliant on elite exploit development and more able to industrialize proven techniques.
For enterprises and startups, the takeaway is uncomfortable but actionable: perimeter devices remain a high-leverage target, and “basic hygiene” (patching, MFA, eliminating exposed admin ports, rotating credentials) is still the best defense against AI-amplified attacks.
Expect more pressure on vendors to ship safer defaults and clearer telemetry, and more demand for services that can continuously validate edge posture. AI is accelerating offense, but mostly by exploiting long-known weak points.
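Part of that hygiene checklist can be continuously automated. A minimal sketch (hypothetical port list and host; real posture validation would be far more thorough) of checking whether management ports answer from the outside:

```python
import socket

# Common management/admin ports on edge devices -- illustrative, not exhaustive.
MGMT_PORTS = [22, 443, 8443]

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def audit(host: str) -> list[int]:
    """List management ports that answer -- each one is exposed attack surface."""
    return [p for p in MGMT_PORTS if is_port_open(host, p)]
```

Only probe hosts you own or are authorized to test; the point is that "is my admin interface reachable?" is a question a script can answer every hour, not a yearly audit item.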
Why It Matters: AI doesn’t need new zero-days to scale damage—just unpatched, exposed infrastructure and automation.
Source: The Register.
IBM stock dive shows how AI “COBOL refactoring” narratives can move markets overnight
IBM shares fell by more than 10 percent on Monday after Anthropic highlighted how its Claude Code tools can accelerate the refactoring of applications written in COBOL—a language still embedded in critical government, airline, and financial systems. Anthropic argued that COBOL developers are scarce and migration is risky and expensive, positioning AI as a shortcut for assessment, documentation, and rewriting.
The striking part: IBM has been making a similar pitch for years, including efforts to use AI to translate COBOL to Java and the launch of products like “watsonx Code Assistant for Z.” But markets often react less to the idea itself and more to a fresh narrative that reframes incumbent advantage as vulnerability. When investors believe AI can collapse switching costs or rewrite legacy faster than incumbents can monetize it, the valuation impact can be immediate.
For founders, the event is a real-time case study in distribution and perception. The “AI can rewrite legacy systems” thesis, if credible, threatens entire categories of maintenance-heavy enterprise software and services—while also creating opportunities for specialized migration tooling, testing frameworks, and risk controls. The hard part is execution: rewriting critical systems is not just code generation; it’s validation, compliance, and operational continuity. The winners will be those who can prove reliability in high-stakes environments, not just those who can generate code quickly.
Why It Matters: AI narratives are now a market force—and they can reprice legacy-tech business models faster than product realities change.
Source: TechStartups (via Anthropic and Google).
Optimizely confirms customer data breach after “vishing” compromise
Optimizely has notified customers about a data breach after attackers gained access to some systems through a voice phishing (vishing) attack, according to BleepingComputer. The incident highlights a recurring pattern: threat actors increasingly target people, not just software, using social engineering to bypass technical controls and obtain credentials or internal access through convincing, real-time manipulation.
For companies in ad tech and digital experience platforms, the blast radius can be meaningful. These systems often touch customer identity data, marketing pipelines, analytics, and campaign operations—making them attractive footholds for lateral movement or data extraction. Even when the number of impacted customers isn’t publicly specified, the reputational costs can be steep because marketing and experimentation stacks are deeply interconnected with customer-facing systems.
The bigger lesson for startups: security maturity isn’t optional just because you’re “not a bank.” Social engineering attacks scale well and exploit human workflows such as vendor support, IT resets, and emergency approvals. Expect increased demand for identity hardening (phishing-resistant MFA, privileged access controls, out-of-band verification) and for internal processes that treat phone-based requests as hostile by default. In 2026, many breaches look less like “a hacker broke our encryption” and more like “someone got persuaded to open the door.”
Why It Matters: Social engineering is becoming the most reliable breach vector—and it targets operational processes every company has.
Source: BleepingComputer.
Researchers flag major security gaps in Android mental health apps with 14.7M installs
BleepingComputer reports that several mental health apps on Google Play—totaling 14.7 million installs—contain vulnerabilities that could expose sensitive medical information. The report underscores a persistent problem in the mobile ecosystem: highly personal categories (health, therapy, mood tracking) often collect intimate data while shipping with weak protections, insecure storage, or flawed API handling.
This is especially consequential because mental health data can be among the most damaging to leak. Beyond identity theft, disclosure can lead to stigma, employment consequences, personal safety risks, or targeted manipulation. For users, it’s hard to evaluate security from an app-store listing; for regulators, it’s difficult to police at scale without clearer standards and enforcement mechanisms.
For startups and builders in digital health, the bar is rising. “Trust” in health tech isn’t branding—it’s an engineering discipline: encryption, secure authentication flows, minimized data collection, and auditable access controls. The market may also tilt toward platforms that can prove compliance and security rigor through independent testing. And for the broader tech ecosystem, stories like this are why privacy-by-design is increasingly seen as a competitive advantage, not a cost center—especially as AI-driven personalization pushes apps to collect even more sensitive inputs.
Why It Matters: Health apps are becoming data vaults, and weak security turns personal well-being into exploitable material.
Source: BleepingComputer.
A tinkerer accidentally accessed 6,700+ robot vacuums, exposing floor plans and live feeds
Tom’s Hardware reports a security flaw that exposed thousands of DJI Romo robot vacuums after a user built an app to control their own device with a PlayStation controller. Instead of isolating access to a single vacuum, the workflow reportedly enabled access to around 6,700 devices worldwide, including floor plans, live camera and microphone feeds, and remote control of the machines. The discoverer says they didn’t “hack” DJI systems—they used a token from their own device, and the system effectively handed over broader access.
DJI reportedly addressed the issue with updates that require no user action, but the incident is a textbook IoT risk: home devices often maintain constant cloud connections, store sensitive data, and depend on brittle authentication and authorization designs. A vacuum map is not trivial; it’s a blueprint of a home. Add cameras and microphones, and a household gadget becomes a surveillance endpoint if identity controls fail.
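The failure pattern described—a valid token for one device granting access to all devices—is a classic broken object-level authorization bug. A hypothetical sketch (not DJI's actual code) of the difference between authenticating a token and authorizing it against a specific device:

```python
# Toy backend state: which token belongs to which account, and who owns what.
VALID_TOKENS = {"tok-abc": "user-1"}                          # token -> account
DEVICE_OWNERS = {"vac-001": "user-1", "vac-002": "user-2"}    # device -> owner

def get_device_feed_broken(token: str, device_id: str) -> str:
    """Broken: any valid token can pull ANY device's feed -- authentication
    without per-object authorization, the gap the incident describes."""
    if token not in VALID_TOKENS:
        raise PermissionError("invalid token")
    return f"feed:{device_id}"

def get_device_feed_fixed(token: str, device_id: str) -> str:
    """Fixed: authenticate the token, then check it against the device owner."""
    user = VALID_TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid token")
    if DEVICE_OWNERS.get(device_id) != user:
        raise PermissionError("token not authorized for this device")
    return f"feed:{device_id}"
```

With the broken version, `get_device_feed_broken("tok-abc", "vac-002")` happily serves another household's feed; the fixed version refuses. The lesson generalizes: every device-scoped endpoint needs an ownership check, not just a login check.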
For consumer tech, this lands as a product strategy warning. “Smart” features ship faster than security models mature, and the pressure to innovate (remote viewing, automated mapping, cloud syncing) can outpace threat modeling. Expect rising scrutiny from consumers and regulators, as well as greater demand for local-only modes, transparent telemetry, and hardened device identity. The next generation of hardware winners may be the ones who treat security and privacy as core specs, not afterthoughts.
Why It Matters: IoT failures don’t just leak passwords—they can leak the physical layout and live signals of people’s homes.
Source: Tom’s Hardware.
OpenAI pushes deeper into consulting partnerships as enterprises demand “AI outcomes,” not pilots
Semafor reports that OpenAI is deepening ties with consulting firms—an acknowledgment that enterprise AI adoption is being shaped as much by implementation capacity as by model quality. Consulting firms have invested heavily to position themselves as AI guides, but clients have complained that many consultants lack hands-on AI expertise. The push now is toward proving measurable value that clients can’t replicate with off-the-shelf tools alone.
This matters because it reshapes the go-to-market path for AI startups. The early wave of enterprise AI was dominated by pilots and proofs-of-concept. The next wave is procurement: integration into workflows, security approvals, change management, and operational ownership. Consulting channels can accelerate distribution, but they also influence product requirements—such as standardized deployments, governance tooling, auditability, and clear ROI narratives.
For the ecosystem, the implication is a consolidation of “enterprise AI stack” power among vendors that are easiest to implement and govern. Startups that can plug into consulting-led rollouts—through APIs, compliance readiness, and repeatable playbooks—may get faster adoption. Meanwhile, firms that can’t demonstrate value beyond what a client can generate themselves with general-purpose AI tools will face margin pressure. This is the maturity phase: fewer demos, more operational results.
Why It Matters: Enterprise AI is moving from experimentation to execution, and consulting alliances are becoming a key distribution channel.
Source: Semafor.
China’s AI app engagement war heats up with giveaways, token surges, and agent-driven commerce
Semafor reports that Chinese tech firms leaned into aggressive promotions around Lunar New Year—giving away cars, vouchers, and cash—to drive engagement with AI apps. ByteDance said it logged 1.9 billion interactions during a widely watched gala and, at one point, processed 63.3 billion tokens in a single minute; Alibaba said nearly 200 million orders were placed through its AI agent interface, including 55 million cups of milk tea.
This isn’t just marketing theater. It signals how quickly consumer AI behavior is being trained at a population scale: users are being nudged to treat AI as an interface for search, shopping, and service transactions. If AI agents become a default commerce layer, the winners are platforms that own distribution and can subsidize adoption. The losers may include smaller apps and merchants that become dependent on agent-driven discovery rules they don’t control.
For global startups, the competitive warning is clear: China is running massive real-world experiments in agent-led engagement, where the dataset isn’t lab prompts—it’s live consumer intent. That can accelerate product iteration and model tuning. It also raises governance questions, from manipulation risks to the transparency of agent recommendations. The next “app store” battle may be fought inside agent interfaces, not icon grids.
Why It Matters: China’s AI platforms are training consumers to transact through agents—potentially redefining how commerce discovery works globally.
Source: Semafor.
“Bot-to-bot” supply-chain attacks emerge as malicious AI skills target crypto wallets and agent trust
SecurityWeek reports on a new class of supply-chain attack centered on AI agents and plugin marketplaces. A security firm analyzing thousands of “Claude Skills” found overtly malicious and high-risk plugins, including an active agent-to-agent attack chain promoted through an agent social platform. The reported scam involved a plugin that instructed agents to store Solana wallet private keys in plaintext, buy worthless tokens, and route payments through attacker-controlled infrastructure—spreading laterally through automated workflows with minimal human interaction.
The crucial shift is the target: not just people, but algorithms and autonomous systems. If agents install skills based on reputation cues, rankings, or social signals, attackers can run influence campaigns tailored to how agents “decide,” then weaponize trust relationships between automated workers. That’s supply-chain poisoning plus social engineering—optimized for machine behavior.
For builders, this is the early warning shot for the agent economy. Marketplaces for skills, tools, and plugins are forming quickly, but governance frameworks are lagging. Expect pressure for code signing, provenance scoring, sandboxed permissions, and continuous behavior monitoring of third-party skills. If agent ecosystems scale without strong verification, the result could be a repeat of the browser-extension malware era—only faster, more automated, and directly connected to financial systems.
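A sketch of what the verification layer that's currently missing could look like, using stdlib HMAC as a stand-in for real asymmetric code signing (hypothetical marketplace API, not an actual Claude Skills mechanism):

```python
import hashlib
import hmac

# Toy marketplace state: reviewed artifact hashes and publisher signing keys.
TRUSTED_HASHES: set[str] = set()
PUBLISHER_KEYS: dict[str, bytes] = {}

def register_skill(artifact: bytes, publisher: str, key: bytes) -> None:
    """Marketplace-side: record a reviewed artifact and its publisher key."""
    TRUSTED_HASHES.add(hashlib.sha256(artifact).hexdigest())
    PUBLISHER_KEYS[publisher] = key

def verify_skill(artifact: bytes, publisher: str, signature: str) -> bool:
    """Agent-side gate before install: the artifact hash must be allowlisted
    AND the publisher's signature over the artifact must verify."""
    if hashlib.sha256(artifact).hexdigest() not in TRUSTED_HASHES:
        return False  # unreviewed or tampered artifact
    key = PUBLISHER_KEYS.get(publisher)
    if key is None:
        return False  # unknown publisher
    expected = hmac.new(key, artifact, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The design point: reputation scores and download counts are social signals attackers can farm, while hash pinning and signature checks are properties an agent can verify mechanically before executing anything.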
Why It Matters: As agents start installing tools for us, the supply chain becomes autonomous—and attackers will exploit that autonomy.
Source: SecurityWeek.
Americans are reportedly dismantling Flock license-plate cameras as surveillance backlash grows
TechCrunch reports that people across the U.S. are dismantling and destroying Flock surveillance cameras, amid rising public anger that license plate readers help immigration authorities and deportations. The report points to a widening clash between the deployment of public safety technology and public consent—especially when data sharing expands beyond the original use cases communities believed they were signing up for.
The underlying issue isn’t just vandalism—it’s legitimacy. When surveillance infrastructure is deployed at scale, governance becomes the product. Communities want clarity on retention, access, auditing, and inter-agency sharing. If residents believe the system is being repurposed for broader enforcement or political objectives, the social license collapses—and the physical infrastructure becomes a target.
For startups selling into government and public safety, this is a market signal: procurement wins aren’t the end of the story. The durability of contracts increasingly depends on transparency, oversight tools, and credible privacy safeguards that can withstand political shifts. Expect more public records scrutiny, legal challenges, and demands for opt-in governance. This is also a reminder that “data exhaust” from civic tech can become combustible when it intersects with civil liberties.
Why It Matters: Surveillance tech is hitting a public trust wall—and that trust now determines whether deployments can survive.
Source: TechCrunch.
London robotaxi trials set the stage for a high-stakes test of autonomy in one of the world’s toughest cities
AP reports London is becoming a major test bed for robotaxis ahead of U.K. government trials launching in the spring. British startup Wayve is preparing for the pilot while major players, including Alphabet’s Waymo and China’s Baidu, are also planning to participate—turning London into a new arena in the global robotaxi race.
London is a uniquely hard environment: dense streets, complex road layouts, and unpredictable pedestrian behavior. AP notes a local dynamic that matters for autonomy—jaywalking isn’t an offense in Britain—meaning vehicles may face constant pedestrian crossings that don’t follow rigid patterns. This is exactly the kind of “edge-case as daily reality” that can expose weaknesses in perception, planning, and safety validation.
For the broader ecosystem, the London trials will be watched as a regulatory and business model experiment: how liability is handled, what safety standards are required, and how ride-hailing partnerships shape rollout. Wayve’s work with Uber and Baidu’s reported tie-ups with both Uber and Lyft signal that autonomy is converging with platform distribution. If London works, it becomes a blueprint for other complex cities in Europe and beyond. If it doesn’t, it strengthens the argument that autonomy will remain geographically constrained for longer than optimists expect.
Why It Matters: London is a real-world stress test for robotaxis—and the results could shape global deployment timelines and regulation.
Source: AP News.
That’s your quick tech briefing for today. Follow us on X @TheTechStartups for more real-time updates.

