Top Tech News Today, May 12, 2026
AI is now on trial in Silicon Valley courtrooms, powering real-world warfare, and being weaponized by hackers in the first documented zero-day attacks, all while Big Tech races to lock in the next era of infrastructure dominance. It’s Tuesday, May 12, 2026, and here are the top stories shaping the global tech and startup ecosystem today.
In the last 24 hours alone, hackers used AI to develop a zero-day exploit, OpenAI launched a $4 billion enterprise deployment company, Nvidia found itself caught in the middle of U.S.-China tensions, and Europe moved closer to cracking down on addictive social media design. Meanwhile, fresh cyberattacks hit the open-source ecosystem, Big Tech companies faced new internal pressure over ethics and layoffs, and startups raised billions to solve the growing AI compute shortage. Below are the top technology news stories you need to know right now.
Technology News Today
OpenAI CEO Sam Altman set to testify in high-stakes Musk lawsuit
OpenAI Chief Executive Sam Altman will take the witness stand Tuesday and Wednesday in a California court as part of Elon Musk’s lawsuit against the company. The case centers on disagreements over OpenAI’s shift from its original nonprofit mission to a more commercial, for-profit model, with Musk alleging breaches of founding agreements. Court documents highlight internal clashes among tech leaders over control of advanced AI development.
The trial underscores deepening rifts in the AI ecosystem, where founder disputes can reshape corporate governance and influence how startups and Big Tech balance innovation with accountability. Legal outcomes could set precedents for future funding rounds and partnerships, especially as investor scrutiny intensifies around AI ethics and direction.
Why It Matters: This courtroom battle between two AI pioneers reveals the growing pains of scaling frontier models and could reshape trust and investment dynamics across the entire tech startup landscape.
Source: Reuters.
Google detects criminal hackers using AI to uncover major software flaws
Google’s Threat Intelligence Group reported the first known instance of criminal actors using AI to discover and weaponize a zero-day vulnerability, and said it likely blocked the exploit before a planned mass exploitation event.
The case illustrates how AI is democratizing access to advanced attack techniques, forcing defensive teams to evolve their detection methods. It adds urgency to collaborative efforts between Big Tech, governments, and startups on cybersecurity standards.
Why It Matters: Google’s discovery of AI-powered zero-days accelerates the need for proactive AI defense strategies across the tech ecosystem.
Source: New York Times.
Apple adds end-to-end encryption for RCS messaging in iOS 26.5 beta
Apple released the iOS 26.5 beta with support for end-to-end encryption in RCS messaging through supported carriers, with the setting enabled by default. The update follows Apple’s broader adoption of RCS for richer messaging between iPhone and Android users.
The update matters because messaging security is no longer just a consumer feature; it is a platform trust issue. RCS has long promised a better cross-platform messaging experience, but encryption has been a key missing piece. If carrier support expands, Apple’s move could help narrow the privacy gap between iMessage and cross-platform texting.
Why It Matters: Apple’s encrypted RCS support could make cross-platform messaging more secure for millions of users.
Source: 9to5Mac.
OpenAI unveils Daybreak security AI to rival Claude Mythos in cybersecurity
OpenAI launched Daybreak, a new security-focused AI tool integrating GPT-5.5-Cyber and Codex Security capabilities to help organizations detect and patch vulnerabilities faster. The release directly competes with Anthropic’s Claude Mythos in the emerging AI-driven cybersecurity market.
The tool addresses rising threats where AI itself aids attackers, offering defenders a proactive edge in vulnerability management. It reflects Big Tech’s push into specialized AI infrastructure that supports enterprise and government clients.
Why It Matters: OpenAI’s Daybreak entry marks a critical step in the development of AI-powered defense tools, strengthening cybersecurity infrastructure and influencing how startups build secure applications in an increasingly hostile digital landscape.
Source: The Verge.
GM cuts hundreds of IT workers as it looks for stronger AI skills
General Motors plans to lay off hundreds of salaried IT workers as it cuts costs and seeks employees with stronger skills in emerging technology areas. Reports say the cuts could affect roughly 500 to 600 employees.
The move matters because AI is changing workforce planning inside old-line industrial companies, not just Silicon Valley software firms. Automakers are simultaneously becoming software, autonomy, battery, data, and AI companies. That shift is forcing them to reassess which technical skills they need and where legacy IT roles fit into the next operating model.
Why It Matters: GM’s cuts show how AI-related restructuring is moving deeper into traditional industries.
Source: TechStartups via CNBC.
AI supply-chain attack hits npm packages tied to Mistral, UiPath, and TanStack
A fresh software supply chain attack compromised multiple npm packages linked to widely used developer tools, including those tied to Mistral, UiPath, and TanStack. Socket reported that the Mini Shai-Hulud-style campaign targeted developer dependency chains and CI credentials, and TanStack confirmed that package versions were compromised.
The incident matters because modern software is built on thousands of open-source dependencies. A single poisoned package can ripple across startups, enterprise apps, AI tools, and internal systems. For developer teams, this is another reminder that software security is no longer just about protecting production systems; it now starts inside the build pipeline.
Why It Matters: The attack shows that open-source dependency chains remain among the weakest links in modern tech infrastructure.
Source: Socket.
Canvas parent Instructure reaches agreement with hackers after global edtech breach
Instructure, the company behind the Canvas learning management platform, said it reached an agreement with hackers who breached its systems and stole data. The company did not disclose what it provided in return, but the breach disrupted schools and universities worldwide that use Canvas.
The incident is significant because Canvas sits at the center of digital education for thousands of institutions. When an edtech platform of this scale is hit, the impact reaches students, teachers, administrators, and school IT departments. It also raises fresh questions about ransom negotiations, claims of data deletion, and whether hackers can ever be trusted to destroy stolen information.
Why It Matters: The Canvas breach highlights the growing cybersecurity risk around education platforms that hold sensitive student and faculty data.
Source: New York Times.
Nvidia CEO Jensen Huang left off Trump’s China tech delegation
Nvidia CEO Jensen Huang was reportedly not invited to join President Trump’s China trip, even though Huang had expressed willingness to attend. The exclusion comes as Nvidia remains at the center of U.S.-China tensions over AI chips, export controls, and access to advanced compute.
The decision matters because Nvidia is arguably the most important company in the AI infrastructure race. Any signal around its access to China can move markets and shape expectations for chip sales, geopolitical risk, and future export policy. For startups and cloud providers, the broader issue is whether AI compute supply will become even more fragmented along national lines.
Why It Matters: Nvidia’s absence from the China delegation reinforces how AI chips have become a geopolitical asset, not just a commercial product.
Source: Bloomberg.
Microsoft exec returns to AWS to strengthen reliability for AI agents
Former Microsoft executive Shawn Bice has rejoined AWS to lead efforts on AI agent reliability and infrastructure, focusing on making autonomous systems more dependable at scale.
The move underscores the intense talent competition between cloud giants as AI agents move from pilots to production, accelerating innovation in reliable frontier tech deployments. It also highlights how Big Tech is bolstering core infrastructure to support emerging agentic AI use cases.
Why It Matters: Bice’s AWS return intensifies the race for robust AI infrastructure, benefiting startups by raising the bar for scalable, reliable agent technologies across cloud platforms.
Source: GeekWire.
U.S. weighs ban on Chinese-made cellular modules over security concerns
The Trump administration is quietly debating whether to restrict Chinese-made cellular modules, according to reporting from the Financial Times. The move would expand Washington’s scrutiny beyond chips, drones, and routers into another layer of connected-device infrastructure.
Cellular modules are embedded in everything from industrial equipment and cars to smart meters and IoT devices. A ban or restriction could affect hardware supply chains across the telecom, manufacturing, transportation, and critical infrastructure sectors. It also fits a wider policy pattern: governments are no longer looking only at headline technologies, but at the hidden components that carry data across networks.
Why It Matters: The potential ban shows how tech supply-chain security is moving deeper into the hardware stack.
Source: Financial Times.
Grok downloads fall sharply as xAI faces adoption questions
Grok downloads reportedly fell to about 8.3 million in April, down from more than 20 million in January, according to AppMagic data cited by The Wall Street Journal. The report also said paid adoption in the U.S. remains almost flat year over year.
The numbers matter because Grok is one of the most visible challengers to ChatGPT, Claude, and Gemini. Slowing consumer adoption raises questions about how much distribution through X can translate into durable AI usage. The report also comes as SpaceX is said to be renting spare computing capacity to Anthropic, underscoring how expensive AI infrastructure has become.
Why It Matters: Grok’s slowdown suggests that even heavily promoted AI products still need strong retention, clear utility, and paid conversion.
Source: Wall Street Journal.
Thinking Machines unveils real-time AI interaction models
Mira Murati’s Thinking Machines Lab introduced a research preview of “interaction models,” designed to let users and AI communicate continuously in real time rather than through traditional turn-based chatbot exchanges. The company says the models can listen, respond, and collaborate more naturally.
The announcement matters because it points to the next interface battle in AI. The current chatbot model forces users to stop, type or speak, wait, and then respond. Thinking Machines is betting that the future looks more like live collaboration, where AI can track speech, context, visual cues, interruptions, and corrections as work unfolds.
Why It Matters: Thinking Machines is pushing AI beyond chat toward real-time collaboration, a shift that could reshape productivity, education, and enterprise tools.
Source: Thinking Machines Lab.
Microsoft Israel GM exits after Azure ethics probe
Microsoft Israel’s general manager is leaving after an internal probe into alleged unethical use of Azure by Israel’s Ministry of Defense, according to Globes. Microsoft France will reportedly manage Microsoft Israel following the leadership change.
The story matters because cloud platforms are increasingly caught in geopolitical and human-rights controversies. Microsoft, Amazon, and Google all sell infrastructure that governments, defense agencies, and security services can use. As AI and cloud systems become more powerful, tech companies face growing pressure to explain how their tools are used and where they draw ethical lines.
Why It Matters: The leadership shake-up shows how cloud contracts can become reputational and governance risks for Big Tech.
Source: Globes.
OpenAI launches $4B AI deployment company for enterprise adoption
OpenAI launched the OpenAI Deployment Company with more than $4 billion in initial investment to help organizations build and deploy AI systems. The company also agreed to acquire AI consulting firm Tomoro, bringing in 150 forward-deployed engineers and deployment specialists.
The move shows OpenAI expanding beyond models and APIs into the services layer where enterprise AI actually gets implemented. Many large companies want AI, but struggle with workflow design, integration, governance, and change management. OpenAI appears to be moving directly into that gap, with support from major investment firms, consultants, and systems integrators.
Why It Matters: OpenAI is turning AI deployment into a formal business line, signaling that the next AI revenue wave may come from implementation, not just model access.
Source: Reuters.
GitLab cuts jobs as it redirects spending toward AI agents
GitLab announced layoffs as part of a restructuring plan, saying the move is not an “AI optimization or cost cutting exercise” but is meant to free up resources for future priorities. The company also plans to reduce the number of countries where it operates.
The cuts matter because GitLab sits directly within the software development ecosystem now being reshaped by AI coding tools and agents. Developer platforms are under pressure to show how they will remain relevant as AI changes how code is written, reviewed, tested, and deployed. Even companies serving developers are now reorganizing around agentic software workflows.
Why It Matters: GitLab’s restructuring shows how AI is forcing even developer-tool companies to rethink staffing, product direction, and operating models.
Source: Bloomberg.
EU targets addictive design on TikTok and Instagram
European Commission President Ursula von der Leyen said the EU will take action against addictive design on platforms such as TikTok and Instagram, including features like endless scrolling. The remarks focused on protecting children from design patterns, extreme content, and AI-generated sexualized material.
The move matters because platform regulation is shifting from content moderation to product design. Regulators are no longer asking only what content platforms remove; they are asking whether the core mechanics of engagement are harmful by design. For social media companies, that could mean more scrutiny of recommendation systems, default settings, age protections, and monetization incentives.
Why It Matters: Europe is pushing platform regulation into the design layer, where engagement features could face new limits.
Source: CNBC.
AI compute startup Amp raises $1.3B to resell excess data center capacity
Amp, a startup focused on buying excess computing capacity from data center operators and reselling it to startups, universities, and other customers, raised $1.3 billion from Andreessen Horowitz and other investors.
The raise matters because AI compute remains one of the biggest bottlenecks in technology. Startups need access to GPUs and data center capacity, but the market is dominated by hyperscalers, cloud giants, and well-funded AI labs. Amp’s model suggests a secondary market for compute could become an important part of the AI infrastructure stack.
Why It Matters: Amp’s funding shows investors are still chasing the infrastructure layer beneath the AI boom.
Source: New York Times.