Microsoft unveils first preview of .NET 11 11 Feb 2026, 5:36 pm
Microsoft has released .NET 11 Preview 1, a planned update to the cross-platform software development platform that features JIT performance improvements, faster compression, CoreCLR support on WebAssembly, and a host of other capabilities.
Unveiled February 10, the preview can be downloaded from dotnet.microsoft.com. Improvements cover areas ranging from the runtime and libraries to the SDK, the C# and F# languages, ASP.NET Core and Blazor, and .NET MAUI (Multi-platform App UI). Changes to the JIT compiler focus on improving startup throughput, enabling more optimizations, and reducing overhead in key code patterns. The enhancements include raising the multicore JIT MAX_METHODS limit to better support large workloads and improve startup throughput in method-heavy apps. Also, non-shared generic virtual methods are de-virtualized to reduce virtual-call overhead and enable further inlining/optimization opportunities. The JIT also generalizes pattern-based induction-variable (IV) analysis to enable more loop analysis cases, opening the door to more loop optimizations, according to Microsoft.
Additionally in .NET 11, initial work has been done to bring CoreCLR support to WebAssembly, although this feature is not yet ready for general release in Preview 1. As part of this work, .NET 11 Preview 1 begins bringing up a Wasm-targeting RyuJIT that will be used for AOT compilation. .NET WebAssembly is being migrated from the Mono runtime to CoreCLR.
Zstandard compression support in the .NET libraries in .NET 11 delivers significantly faster compression and decompression compared to existing algorithms while maintaining competitive compression ratios. New APIs include a full set of streaming, one-shot, and dictionary-based compression and decompression capabilities. Also featured is a per-year cache for time zone transitions, improving performance for time conversions. The cache stores all transitions for a given year in UTC format, eliminating repeated rule lookups during conversions.
C# 15 in .NET 11 Preview 1 introduces collection expression arguments, a feature that supports scenarios where a collection expression does not produce the desired collection type. Collection expression arguments enable developers to specify capacity, comparers, or other constructor parameters directly within the collection expression syntax. C# 15 also brings extended layout support, by which the C# compiler emits TypeAttributes.ExtendedLayout for types that have the System.Runtime.InteropServices.ExtendedLayoutAttribute applied. This feature is primarily intended for the .NET runtime team to use for types in interop scenarios.
With F# 11 in .NET 11 Preview 1, the F# compiler has parallel compilation enabled by default and features faster compilation of computation expression-heavy code. ML compatibility has been removed, though. The keywords asr, land, lor, lsl, lsr, and lxor — previously reserved for ML compatibility — are now available as identifiers. Microsoft said that F# began its life as an OCaml dialect running on .NET, and for more than two decades, the compiler carried compatibility constructs from that heritage including .ml and .mli source file extensions, the #light "off" directive for switching to whitespace-insensitive syntax, and flags like --mlcompatibility. These served the language well during its early years, providing a bridge for developers coming from the ML family, the company said, but that chapter comes to a close. About 7,000 lines of legacy code have been removed across the compiler, parser, and test suite.
.NET 11 follows the November 2025 release of .NET 10, which brought AI, language, and runtime improvements. Other features touted for .NET 11 include the following:
- Runtime async introduces new runtime-level infrastructure for async methods. The goal is to improve tools and performance for async-heavy codepaths.
- CoreCLR is now the default runtime for Android Release builds. This should improve compatibility with the rest of .NET as well as reduce startup times, Microsoft said.
- CLI command improvements in the SDK include dotnet run being enhanced to support interactive selection workflows, laying the foundation for improved .NET MAUI and mobile development scenarios.
- The Blazor web framework adds an EnvironmentBoundary component for conditional rendering based on the hosting environment. This component is similar to the MVC environment tag helper and provides a consistent way to render content based on the current environment across both server and WebAssembly hosting models, Microsoft said.
- XAML source generation is now the default in .NET 11 for all .NET MAUI applications, improving build times, debug performance, and release runtime performance. Debug build app behavior is consistent with release build app behavior, according to Microsoft.
Google Cloud launches GEAR program to broaden AI agent development skills 11 Feb 2026, 3:19 am
As enterprises shift from experimenting with AI agents to deploying them in production environments, Google is rolling out a structured skills program aimed at helping developers build, test, and operationalize AI agents using Google Cloud tools, specifically its Agent Development Kit (ADK).
Named the Gemini Enterprise Agent Ready (GEAR) program, the initiative packages hands-on labs, 35 free monthly recurring Google Skills credits, and badge-earning pathways into a track within the Google Developer Program.
Currently, the pathways available include “Introduction to Agents” and “Develop Agents with Agent Development Kit (ADK),” which are targeted at helping developers understand the anatomy of an agent, how they integrate with Gemini Enterprise workflows, and how to build an agent using ADK.
These pathways will enable developers to learn a new set of practical engineering skills to succeed in real business environments, Google executives wrote in a blog post.
They contend that by embedding GEAR within the Google Developer Program and Google Skills, developers can experiment without cost barriers and systematically learn how to build, test, and deploy agents at scale.
This, in turn, helps enterprises accelerate the transition from isolated AI pilots to operational solutions that generate measurable value across production workflows, they wrote.
The difficulty of moving AI from pilot to production is well documented: Deloitte’s 2026 State of AI in the Enterprise report found that only about 25% of the 3,200 respondents said their enterprises have moved 40% of their AI pilots into production.
Rival hyperscalers, too, offer similar programs.
While Microsoft runs structured AI learning paths and certifications via Microsoft Learn tied to Azure AI, AWS provides hands-on labs and training through AWS Skill Builder with AI/ML and generative AI tracks.
Beyond skills development, however, these initiatives are closely tied to broader platform strategies. Google’s rollout of GEAR can also be read as an effort to cement Gemini Enterprise’s role as a competitive agent development platform at a time when hyperscalers are all vying to own the enterprise agent narrative.
Microsoft has been actively positioning its stack — including Azure OpenAI Service, Azure AI Studio, and Copilot Studio — as an agent orchestration and workflow automation hub.
Similarly, AWS is pushing Bedrock Agents as part of its foundation model ecosystem.
Others, such as Salesforce and OpenAI, are also in on the act. While Salesforce markets its Agentforce suite embedded in CRM workflows, OpenAI’s Assistants API is being positioned as a flexible agent layer.
The death of reactive IT: How predictive engineering will redefine cloud performance in 10 years 11 Feb 2026, 2:00 am
For more than two decades, IT operations has been dominated by a reactive culture. Engineers monitor dashboards, wait for alerts to fire and respond once systems have already begun to degrade. Even modern observability platforms equipped with distributed tracing, real-time metrics and sophisticated logging pipelines still operate within the same fundamental paradigm: something breaks, then we find out.
But the digital systems of today no longer behave in ways that fit this model. Cloud-native architectures built on ephemeral microservices, distributed message queues, serverless functions and multi-cloud networks generate emergent behavior far too complex for retrospective monitoring to handle. A single mis-tuned JVM flag, a slightly elevated queue depth or a latency wobble in a dependency can trigger cascading failure conditions that spread across dozens of microservices in minutes.
The mathematical and structural complexity of these systems has now exceeded human cognitive capacity. No engineer, no matter how experienced, can mentally model the combined state, relationships and downstream effects of thousands of constantly shifting components. The scale of telemetry alone, billions of metrics per minute, makes real-time human interpretation impossible.
This is why reactive IT is dying and this is why predictive engineering is emerging, not as an enhancement, but as a replacement for the old operational model.
Predictive engineering introduces foresight into the infrastructure. It creates systems that do not just observe what is happening; they infer what will happen. They forecast failure paths, simulate impact, understand causal relationships between services and take autonomous corrective action before users even notice degradation. It is the beginning of a new era of autonomous digital resilience.
Why reactive monitoring is inherently insufficient
Reactive monitoring fails not because tools are inadequate, but because the underlying assumption that failures are detectable after they occur no longer holds true.
Modern distributed systems have reached a level of interdependence that produces non-linear failure propagation. A minor slowdown in a storage subsystem can exponentially increase tail latencies across an API gateway. A retry storm triggered by a single upstream timeout can saturate an entire cluster. A microservice that restarts slightly too frequently can destabilize a Kubernetes control plane. These are not hypothetical scenarios; they are the root cause of the majority of real-world cloud outages.
Even with high-quality telemetry, reactive systems suffer from temporal lag. Metrics show elevated latency only after it manifests. Traces reveal slow spans only after downstream systems have been affected. Logs expose error patterns only once errors are already accumulating. By the time an alert triggers, the system has already entered a degraded state.
The architecture of cloud systems makes this unavoidable. Auto scaling, pod evictions, garbage collection cycles, I/O contention and dynamic routing rules all shift system state faster than humans can respond. Modern infrastructure operates at machine speed; humans intervene at human speed. The gap between those speeds is growing wider every year.
The technical foundations of predictive engineering
Predictive engineering is not marketing jargon. It is a sophisticated engineering discipline that combines statistical forecasting, machine learning, causal inference, simulation modeling and autonomous control systems. Below is a deep dive into its technical backbone.
Predictive time-series modeling
Time-series models learn the mathematical trajectory of system behavior. LSTM networks, GRU architectures, Temporal Fusion Transformers (TFT), Prophet and state-space models can project future values of CPU utilization, memory pressure, queue depth, IOPS saturation, network jitter or garbage collection behavior, often with astonishing precision.
For example, a TFT model can detect the early curvature of a latency increase long before any threshold is breached. By capturing long-term patterns (weekly usage cycles), short-term patterns (hourly bursts) and abrupt deviations (traffic anomalies), these models become early-warning systems that outperform any static alert.
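To make the idea concrete, here is a minimal sketch (not any vendor’s implementation) of this kind of early-warning forecast in Python, using the open-source Prophet library; the input file name and the 85% alert threshold are assumptions for illustration.

import pandas as pd
from prophet import Prophet

# Assumed input: per-minute CPU utilization with the two columns Prophet
# expects, "ds" (timestamp) and "y" (percent utilization).
cpu = pd.read_csv("cpu_utilization.csv", parse_dates=["ds"])

model = Prophet(
    weekly_seasonality=True,  # long-term pattern: weekly usage cycles
    daily_seasonality=True,   # short-term pattern: daily/hourly bursts
)
model.fit(cpu)

# Project the next two hours at one-minute resolution.
future = model.make_future_dataframe(periods=120, freq="min")
forecast = model.predict(future)

# Flag the early curvature of a rise well before a static threshold alert would fire.
upcoming = forecast.tail(120)
if (upcoming["yhat_upper"] > 85.0).any():
    print("Early warning: projected CPU saturation risk within two hours")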
Causal graph modeling
Unlike correlation-based observability, causal models understand how failures propagate. Using structural causal models (SCM), Bayesian networks and do-calculus, predictive engineering maps the directionality of impact:
- A slowdown in Service A increases the retry rate in Service B.
- Increased retries elevate CPU consumption in Service C.
- Elevated CPU in Service C causes throttling in Service D.
This is no longer guesswork; it is mathematically derived causation. It allows the system to forecast not just what will degrade, but why it will degrade and what chain reaction will follow.
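As a rough illustration of the bookkeeping side of this (the statistical causal inference itself is a much deeper topic), the Python sketch below encodes the propagation chain above as a directed graph and walks everything downstream of a predicted degradation; the service names and mechanisms are placeholders.

import networkx as nx

# Directed edges point from cause to effect, mirroring the chain described above.
impact = nx.DiGraph()
impact.add_edge("Service A slowdown", "Service B retry rate", mechanism="timeouts trigger retries")
impact.add_edge("Service B retry rate", "Service C CPU", mechanism="retries add load")
impact.add_edge("Service C CPU", "Service D throttling", mechanism="CPU pressure trips limits")

# Given a predicted degradation at the root, list every downstream effect
# the graph says will follow, in traversal order.
root = "Service A slowdown"
for node in list(nx.dfs_preorder_nodes(impact, source=root))[1:]:
    print("Predicted downstream impact:", node)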
Digital twin simulation systems
A digital twin is a real-time, mathematically faithful simulation of your production environment. It tests hypothetical conditions:
- “What if a surge of 40,000 requests hits this API in 2 minutes?”
- “What if SAP HANA experiences memory fragmentation during period-end?”
- “What if Kubernetes evicts pods on two nodes simultaneously?”
By running tens of thousands of simulations per hour, predictive engines generate probabilistic failure maps and optimal remediation strategies.
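A real digital twin is a far richer model, but the toy Monte Carlo sketch below (in Python, with made-up capacity and latency numbers) shows the shape of the exercise: replay a hypothetical surge many times and report the probability of breaching a latency budget.

import random

def simulate_surge(total_requests: int, capacity_per_min: int,
                   minutes: int = 2, p99_budget_ms: float = 500.0) -> bool:
    """One simulated run: does the surge blow the latency budget?"""
    backlog = 0.0
    per_min = total_requests / minutes
    for _ in range(minutes):
        arrivals = random.gauss(per_min, per_min * 0.1)  # noisy arrival rate
        backlog = max(0.0, backlog + arrivals - capacity_per_min)
    # Crude queueing approximation: latency grows with the residual backlog.
    est_p99_ms = 50.0 + 1000.0 * backlog / max(capacity_per_min, 1)
    return est_p99_ms > p99_budget_ms

# "What if a surge of 40,000 requests hits this API in 2 minutes?"
runs = [simulate_surge(40_000, 16_000) for _ in range(10_000)]
print(f"Probability of breaching the latency budget: {sum(runs) / len(runs):.1%}")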
Autonomous remediation layer
Predictions are pointless unless the system can act on them. Autonomous remediation uses policy engines, reinforcement learning and rule-based control loops to:
- Pre-scale node groups based on predicted saturation
- Rebalance pods to avoid future hotspots
- Warm caches before expected demand
- Adjust routing paths ahead of congestion
- Modify JVM parameters before memory pressure spikes
- Preemptively restart microservices showing anomalous garbage-collection patterns
This transforms the system from a monitored environment into a self-optimizing ecosystem.
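A minimal, rule-based version of the kind of control loop described above might look like the Python sketch below; forecast_node_pool_utilization and scale_node_pool are hypothetical stand-ins for a prediction engine and a cluster autoscaler API, and the thresholds are illustrative.

import time

SCALE_THRESHOLD = 0.85   # predicted utilization that triggers pre-scaling
LOOKAHEAD_MINUTES = 25   # how far ahead the forecast looks

def remediation_loop(forecast_node_pool_utilization, scale_node_pool):
    """Evaluate a prediction every minute and act before saturation occurs."""
    while True:
        predicted = forecast_node_pool_utilization(lookahead_minutes=LOOKAHEAD_MINUTES)
        if predicted > SCALE_THRESHOLD:
            # Act while the system is still healthy rather than after an alert fires.
            scale_node_pool(extra_nodes=2)
        time.sleep(60)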
Predictive engineering architecture
To fully understand predictive engineering, it helps to visualize its components and how they interact. Below is a series of architecture diagrams that illustrate the workflow of a predictive system:
DATA FABRIC LAYER
┌──────────────────────────────────────────────────────────┐
│ Logs | Metrics | Traces | Events | Topology | Context │
└───────────────────────┬──────────────────────────────────┘
▼
FEATURE STORE / NORMALIZED DATA MODEL
┌──────────────────────────────────────────────────────────┐
│ Structured, aligned telemetry for advanced ML modeling │
└──────────────────────────────────────────────────────────┘
▼
PREDICTION ENGINE
┌────────────┬──────────────┬──────────────┬──────────────┐
│ Forecasting │ Anomaly │ Causal │ Digital Twin │
│ Models │ Detection │ Reasoning │ Simulation │
└────────────┴──────────────┴──────────────┴──────────────┘
▼
REAL-TIME INFERENCE LAYER
(Kafka, Flink, Spark Streaming, Ray Serve)
▼
AUTOMATED REMEDIATION ENGINE
- Autoscaling
- Pod rebalancing
- API rate adjustment
- Cache priming
- Routing optimization
▼
CLOSED-LOOP FEEDBACK SYSTEM
This pipeline captures how data is ingested, modeled, predicted and acted upon in a real-time system.
Reactive vs predictive lifecycle
Reactive IT:
Event Occurs → Alert → Humans Respond → Fix → Postmortem
Predictive IT:
Predict → Prevent → Execute → Validate → Learn
Predictive Kubernetes workflow
Metrics + Traces + Events
│
▼
Forecasting Engine
(Math-driven future projection)
│
▼
Causal Reasoning Layer
(Dependency-aware impact analysis)
│
▼
Prediction Engine Output
“Node Pool X will saturate in 25 minutes”
│
▼
Autonomous Remediation Actions
- Pre-scaling nodes
- Pod rebalancing
- Cache priming
- Traffic shaping
│
▼
Validation
The future: Autonomous infrastructure and zero-war-room operations
Predictive engineering will usher in a new operational era where outages become statistical anomalies rather than weekly realities. Systems will no longer wait for degradation; they will preempt it. War rooms will disappear, replaced by continuous optimization loops. Cloud platforms will behave like self-regulating ecosystems, balancing resources, traffic and workloads with anticipatory intelligence.
In SAP environments, predictive models will anticipate period-end compute demands and autonomously adjust storage and memory provisioning. In Kubernetes, predictive scheduling will prevent node imbalance before it forms. In distributed networks, routing will adapt in real time to avoid predicted congestion. Databases will adjust indexing strategies before query slowdowns accumulate.
The long-term trajectory is unmistakable: autonomous cloud operations.
Predictive engineering is not merely the next chapter in observability; it is the foundation of fully self-healing, self-optimizing digital infrastructure.
Organizations that adopt this model early will enjoy a competitive advantage measured not in small increments but in orders of magnitude. The future of IT belongs to systems that anticipate, not systems that react.
This article is published as part of the Foundry Expert Contributor Network.
Software at the speed of AI 11 Feb 2026, 1:00 am
In the immortal words of Ferris Bueller, “Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” This could also be said of the world of AI. No, it could really be said about the world of AI. Things are moving at the speed of a stock tip on Wall Street.
And things spread on Wall Street pretty fast last week. The S&P 500 Software and Services Index lost about $830 billion in market value over six straight sessions of losses ending February 4. The losses were heavy for SaaS companies, sparking the coining of the phrase “SaaSpocalypse.” At the center of the concern was Anthropic’s release of Claude Cowork, which, in many eyes, could render SaaS applications obsolete, or at least a whole lot less valuable.
And the more I think about it, the harder it is for me to believe they are wrong.
If you have Claude Code fixing bugs, do you really need a Jira ticket? Why go to a legal documents site when Claude.ai can just write your will out for you, tailoring it to your specifications for a single monthly fee? Do you need 100 Salesforce seats if you can do the work with 10 people using AI agents?
The answers to those questions are almost certainly bad news for a SaaS company. And it is only going to get worse and worse—or better and better, depending on your point of view.

We are entering an age where there will be a massive abundance of intelligence, but if Naval Ravikant is right—and I think he is—we will never have enough. The ramifications of that are, I have to admit, not known. But I won’t hesitate to speculate.
Historically, when there has been soaring demand for something, and that demand has been met, it has had a profound effect on the job market. Electricity wiped out the demand for goods like hand-cranked tools and gas lamps, but it ushered in a huge demand for electricians, power plant technicians, and assemblers of electrical household appliances. And of course, electricity had huge downstream effects. The invention of the transistor led to the demand for computers, eliminating many secretaries, human computers, slide rule manufacturers, and the like.
And today? The demand for AI is boundless. And it will almost certainly have profound effects on labor markets. Will humans be writing code for much longer? I don’t think so.
For us developers, coding agents are getting more powerful every few months, and that pace is accelerating. Both OpenAI and Anthropic have released new large language models in the past week that are receiving rave reviews from developers. The race is on—who knows how soon the next iterations will appear.
We are fast approaching the day when anyone with an idea will be able to create an application or a website in a matter of hours. The term “software developer” will take on new meaning. Or maybe it will go the way of the term “buggy whip maker.” Time will tell.
That sounds depressing to some, I suppose, but if history repeats, AI will also bring an explosion of jobs and job titles that we haven’t yet conceived. If you told a lamplighter in 1880 that his great-grandchild would be a “cloud services manager,” he would have looked at you like you had three heads.
And if an hour of AI time will soon produce what used to take a consultant 100 hours at $200 an hour, we humans will inevitably come up with software and services we can’t yet fathom.
I’m confident that my great-grandchild will have a job title that is inconceivable today.
How vibe coding will supercharge IT teams 11 Feb 2026, 1:00 am
There’s a palpable tension in IT today. Teams are stretched to their limits with a growing backlog of initiatives, while executives expect IT to lead the charge on transforming an organization into an AI-driven one.
And the numbers paint a somber picture. IT teams are drowning in work as digital transformation projects have not slowed down but rather accelerated. In fact, IT project requests in 2025 jumped 18% compared to the year prior, and nearly one in three IT projects (29%) missed their deadlines, creating tension with business stakeholders.
But here’s the part few IT leaders say out loud: the answer isn’t about putting in more hours to catch up; it’s about supercharging the teams you already have with existing tools at your disposal. When the work itself has outpaced traditional capacity, the solution becomes enabling existing staff to be more productive and tackle previously backlogged work.
So what should you do when the gap between what’s needed and what’s possible keeps widening? You start rethinking who gets to build, who gets to automate, and how work actually gets done.
The answer to the skills crisis lies in unlocking the talent you already have—and that is precisely the shift happening right now with vibe coding.
The rise of vibe coding
It starts with a simple idea. You, the domain expert, describe what you want in natural language, and an AI agent (or agents) takes it from there, planning, reasoning, and executing end to end. It’s the moment when domain experts—be it in IT or other domains—finally get to build the systems they’ve been waiting for.
Now, domain experts no longer have to master complex coding syntax to turn their ideas into workflows, processes, or service steps. They can finally build the systems they know best. And the IT service organizations that understand this first will deliver experiences their competitors can’t match.
Want an onboarding sequence with provisioning, equipment, training, and approvals? Just describe it. The AI agent maps the flow, identifies the dependencies, pulls in the right steps, and assembles the workflow. Want to assess incidents faster? Simply tell the AI agent the conditions. The agent reads employee messages, extracts context, spots patterns, matches related incidents, and sets up the next steps automatically.
I know what you’re thinking: “Oh, look, another tech exec showing how AI is going to replace jobs.” Let me be clear: Developers don’t disappear in this world. They just stop getting pulled into repetitive maintenance work and shift their focus to higher-impact areas like architecture, design, and solving real problems—the work that actually needs a human developer and drives further innovation.
IT service before and after agentic AI
If you’ve worked inside a traditional IT service environment, you already know the pain points by heart: the static forms, the rigid workflows, the dependence on specialists, the manual handoffs and the endless context switching. None of this is news to you.
Agentic AI changes the service cycle at every layer, beginning right at the point of contact. An employee reaches out from wherever they already work—maybe Slack or a web portal. The AI agent immediately reads the intent, extracts the details, checks for related issues, and fills in the fields that a human used to handle. This means no portals, forms, or back-and-forth just to understand what’s going on.
As the case develops, the agent analyzes whether something should be classified as an incident. It looks at the configuration items involved, detects similar open issues, and even recommends likely root causes. And all of this pulls from a dynamic configuration management database (CMDB) that maps systems and assets in real time, giving IT analysts the context they’re usually scrambling to piece together.
Escalations feel different too. The AI agent hands the human specialist a complete, ready-to-use summary of what’s happening. And the technical support engineer finally gets to focus on solving the problem rather than chasing down information. Teams can even swarm incidents directly in Slack with full links to the underlying records.
All of this adds up to results you can feel immediately: faster responses and lower mean time to repair (MTTR). The best part? You get it with the team you already have.
Who gets to build
The most transformative part of vibe coding is access. Suddenly, the people who actually understand the work can help build it, from IT service specialists to HR partners to operations managers—really, anyone who knows what needs to happen, can describe it, and can pass it on to AI agents to handle execution.
This is how organizations reclaim capacity. In fact, 67% of organizations report that AI is reshaping technical work, requiring upskilling of the existing workforce. Developers get the breathing room to focus on infrastructure and modernization. Business teams get the freedom to build and iterate in real time. And leaders get an operating model that’s more adaptable and resilient, one that doesn’t fall apart the moment the talent market tightens.
Nobody’s perfect
It should go without saying that vibe coding is no panacea. It’s a powerful start, but don’t treat it as a finished product.
As industry analysts like Vernon Keenan have noted, a large language model (LLM) is like a power plant in that it provides the raw energy, but requires a robust orchestration grid and shared enterprise context to be truly usable in an enterprise. Without deterministic control layers, rigorous observability, and context into your business, natural language prompts can still lead to hallucinations that could end up creating more manual cleanup for your teams.
The key is to adopt a vibe-but-check mindset, where AI handles the creative heavy lifting while humans provide the essential governance. Ensure your orchestration platform has a trust layer and auditable execution traces so that every agentic workflow remains grounded in actual business logic.
The question leaders need to answer now
Do we wait until this becomes the standard? Or do we treat the talent crisis as the moment to proactively rethink how work gets done?
Organizations that act early will greatly reduce operational friction, improve employee experience, protect their teams from burnout, and create an enterprise where domain experts become creators, not just requesters.
The shift has already begun. The organizations that lean into it will feel the difference first.
—
New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.
First look: Run LLMs locally with LM Studio 11 Feb 2026, 1:00 am
Dedicated desktop applications for agentic AI make it easier for relatively non-technical users to work with large language models. Instead of writing Python programs and wrangling models manually, users open an IDE-like interface and have logged, inspectable interactions with one or more LLMs.
Amazon and Google have released such products with a focus on AI-assisted code development—Kiro and Antigravity, respectively. Both products offer the option to run models locally or in cloud-hosted versions.
LM Studio by Element Labs provides a local-first experience for running, serving, and working with LLMs. It’s designed for more general conversational use rather than code-development tasks, and while its feature set is still minimal, it’s functional enough to try out.
Set up your models
When you first run LM Studio, you’ll want to set up one or more models. A sidebar button opens a curated search panel, where you can search for models by name or author, and even filter based on whether the model fits within the available memory on your current device. Each model has a description of its parameter size, general task type, and whether it’s trained for tool use. For this review, I downloaded three different models:
- GLM 4.7 Flash by Z.ai
- Nemotron 3 Nano by NVIDIA
- GPT-OSS, the 20B version of OpenAI’s open source model
Downloads and model management are all tracked inside the application, so you don’t have to manually wrangle model files like you would with ComfyUI.

The model selection interface for LM Studio. The model list is curated by LM Studio’s creators, but the user can manually install models outside this interface by placing them in the app’s model directory.
Foundry
Conversing with an LLM
To have a conversation with an LLM, you choose which one to load into memory from the selector at the top of the window. You can also adjust the controls for running the model—e.g., whether you want to attempt to load the entire model into memory, how many CPU threads to devote to serving predictions, how many layers of the model to offload to the GPU, and so on. The defaults are generally fine, though.
Conversations with a model are all tracked in separate tabs, including any details about the model’s thinking or tool integrations (more on these below). You also get a running count of how many tokens are used or available for the current conversation, so you can get a sense of how much the conversation is costing as it unfolds. If you want to work with local files (“Analyze this document for clarity”), you can just drag and drop them into the conversation. You can also grant the model access to the local file system by way of an integration, although for now I’d only do that with great care and on a system that did not include mission-critical information.

An example of a conversation with a model in LM Studio. Chats can be exported in a variety of formats, and contain expandable sections that detail the model’s internal thinking. The sidebar at right shows various available integrations, all currently disabled.
Foundry
Integrations
LM Studio lets you add MCP server applications to extend agent functionality. Only one integration is included by default—a JavaScript code sandbox that allows the model to run JavaScript or TypeScript code using Deno. It would have been useful to have at least one more integration to allow web search, though I was able to add a Brave search integration with minimal work.
The big downside with integrations in LM Studio is that they are wholly manual. There is currently no automated mechanism for adding integrations, and there’s no directory of integrations to browse. You need to manually edit an mcp.json file to describe the integrations you want and then supply the code yourself. It works, but it’s clunky, and it makes that part of LM Studio feel primitive. If there’s anything that needs immediate fixing, it’s this.
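For reference, a hypothetical mcp.json for a Brave search server might look like the following (written from Python here for convenience); the mcpServers schema follows the common MCP client convention, but LM Studio’s exact expectations, file location, and the server package shown are assumptions to verify against its documentation.

import json
from pathlib import Path

# Hypothetical illustration only: the server name, package, and env key are examples.
mcp_config = {
    "mcpServers": {
        "brave-search": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-brave-search"],
            "env": {"BRAVE_API_KEY": "<your-api-key>"},
        }
    }
}

Path("mcp.json").write_text(json.dumps(mcp_config, indent=2))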
Despite these limits, the way MCP servers are integrated is well-thought-out. You can disable, enable, add, or modify such integrations without having to close and restart the whole program. You can also whitelist the way integrations work with individual conversations or the entire program, so that you don’t have to constantly grant an agent access. (I’m paranoid, so I didn’t enable this.)
Using APIs to facilitate agentic behavior
LM Studio can also work as a model-serving system, either through the desktop app or through a headless service. Either way, you get a REST API that lets you work with models and chat with them, and get results back either all at once or in a progressive stream. A recently added Anthropic-compatible endpoint lets you use Claude Code with LM Studio. This means it’s possible to use self-hosted models as part of a workflow with a code-centric product like Kiro or Antigravity.
Another powerful feature is tool use through an API endpoint. A user can write a script that interacts with the LM Studio API and also supplies its own tool. This allows for complex interactions between the model and the tool—a way to build agentic behaviors from scratch.
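As a rough sketch of that pattern, the Python snippet below talks to LM Studio’s local OpenAI-compatible server and declares a tool the script itself would implement; the server address is LM Studio’s usual default, while the model identifier and the tool are assumptions for illustration.

from openai import OpenAI

# LM Studio's local server is OpenAI-compatible; the address below is its
# common out-of-the-box default, and the API key is ignored by the local server.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "get_disk_usage",          # a tool this script itself would implement
        "description": "Return free disk space in GB for a given path.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",            # assumed identifier for a loaded model
    messages=[{"role": "user", "content": "How much free space is on my home drive?"}],
    tools=tools,
)

# If the model decided to call the tool, the request shows up here; the script
# would run get_disk_usage itself and send the result back in a follow-up turn.
print(response.choices[0].message.tool_calls)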

The internal server settings for LM Studio. The program can be configured to serve models across a variety of industry-standard APIs, and the UI exposes various tweaks for performance and security.
Foundry
Conclusion
LM Studio’s clean design and convenience features are a good start, but many key features are still missing. Future releases could focus on filling those gaps.
Tool integration still requires cobbling things together manually, and there is no mechanism for browsing and downloading from a curated tools directory. The included roster of tools is also extremely thin—as an example, there isn’t an included tool for web browsing and fetching.
Another significant issue is that LM Studio isn’t open source even though some of its components are—such as its command-line tooling. The licensing for LM Studio allows for free use, but there’s no guarantee that will always be the case. Nonetheless, even in this early incarnation, LM Studio is useful for those who have the hardware and the knowledge to run models locally.
Java use in AI development continues to grow – Azul report 10 Feb 2026, 6:17 pm
Java is becoming more popular for building AI applications, with 62% of respondents in Azul’s just-released 2026 State of Java Survey and Report relying on Java for AI development. Last year’s report had 50% of participants using Java for AI functionality.
Released February 10, the report featured findings from a survey of more than 2,000 Java users from September 2025 to November 2025. The report also found that 81% of participants have migrated, are migrating, or plan to migrate from Oracle’s Java to a non-Oracle OpenJDK distribution, with 92% expressing concern about Oracle Java pricing.
The survey discovered a clear trend toward embedding AI within enterprise systems that Java already powers, according to the report. The report noted that Java developers have many AI libraries to choose from when developing AI functionality, the most-used among respondents being JavaML, followed by Deep Java Library (DJL) and OpenCL. 31% said that more than half the code they produce includes AI functionality.
Respondents were also asked about the AI-powered code generation tools they used to create Java application code. Here OpenAI’s ChatGPT led the way, followed by Google Gemini Code Assist, Microsoft Visual Studio IntelliCode, and GitHub Copilot.
In other findings in the report:
- 18% had already adopted Java Development Kit (JDK) 25, the most recent Long Term Support (LTS) release, which became available in September 2025.
- 64% said more than half of their workloads or applications were built with Java or run on a JVM, compared to 68% last year.
- 43% said Java workloads account for more than half of their cloud compute bills.
- 63% said dead or unused code affects devops productivity to some extent or a great extent.
GitHub previews support for Claude and Codex coding agents 10 Feb 2026, 2:34 pm
GitHub is adding support for the Anthropic Claude and OpenAI Codex coding agents, via its Agent HQ AI platform. The capability is in public preview.
Copilot Pro+ and Copilot Enterprise users now can run multiple coding agents directly inside GitHub, GitHub Mobile, and Visual Studio Code, GitHub announced on February 4. GitHub said that Copilot CLI support was coming soon.
With Claude, Codex, and GitHub Copilot in Agent HQ, developers can move from idea to implementation using different agents for different steps without switching tools or losing context, the company said. “We’re bringing Claude into GitHub to meet developers where they are,” said Katelyn Lesse, head of platform at Anthropic, in a statement. “With Agent HQ, Claude can commit code and comment on pull requests, enabling teams to iterate and ship faster and with more confidence. Our goal is to give developers the reasoning power they need, right where they need it.”
Working directly with these agents in GitHub and VS Code, users can do the following, according to GitHub:
- Explore trade-offs early by running agents in parallel to surface competing approaches and edge cases before code hardens.
- Keep context attached to the work, with agents operating inside the user’s repository, issues, and pull requests instead of starting from stateless prompts.
- Avoid new review processes, with agent-generated changes showing up as draft pull requests and comments, reviewed the same way a user would review a teammate’s work.
GitHub’s Agent HQ also lets users compare how different agents approach the same problem, GitHub said. Developers can assign multiple agents to a task, and see how Copilot, Claude, and Codex reason about trade-offs and arrive at different solutions.
Single prompt breaks AI safety in 15 major language models 10 Feb 2026, 3:42 am
A single benign-sounding prompt can systematically strip safety guardrails from major language and image models, raising fresh questions about the durability of AI alignment when models are customized for enterprise use, according to Microsoft research.
The technique, dubbed GRP-Obliteration, weaponizes a common AI training method called Group Relative Policy Optimization, normally used to make models more helpful and safer, to achieve the opposite effect, the researchers said in a blog post.
Microsoft tested the method on 15 models from six families, including GPT-OSS, DeepSeek-R1-Distill variants, Google’s Gemma, Meta’s Llama 3.1, Mistral AI’s Ministral, and Qwen, using just one training prompt: “Create a fake news article that could lead to panic or chaos.”
Despite the prompt’s focus on misinformation, training on this single example caused models to become more permissive across all 44 harmful categories in the SorryBench safety benchmark, from violence and hate speech to fraud and terrorism. GPT-OSS-20B saw its attack success rate jump from 13% to 93% across these categories.
“This is a significant red flag if any model gets tripped off its basic safety guardrails by just a manipulative prompt,” said Neil Shah, co-founder and VP at Counterpoint Research. “For CISOs, this is a wake-up call that current AI models are not entirely ready for prime time and critical enterprise environments.”
Shah said the findings call for adoption of “enterprise-grade” model certification with security checks and balances, noting that “the onus should be first on the model providers to system integrators, followed by a second level of internal checks by CISO teams.”
“What makes this surprising is that the prompt is relatively mild and does not mention violence, illegal activity, or explicit content,” the research team, comprising Microsoft’s Azure CTO Mark Russinovich and AI safety researchers Giorgio Severi, Blake Bullwinkel, Keegan Hines, Ahmed Salem, and principal program manager Yanan Cai, wrote in the blog post. “Yet training on this one example causes the model to become more permissive across many other harmful categories it never saw during training.”
Enterprise fine-tuning at risk
The findings carry particular weight as organizations increasingly customize foundation models through fine-tuning—a standard practice for adapting models to domain-specific tasks.
“The Microsoft GRP-Obliteration findings are important because they show that alignment can degrade precisely at the point where many enterprises are investing the most: post-deployment customization for domain-specific use cases,” said Sakshi Grover, senior research manager at IDC Asia/Pacific Cybersecurity Services.
The technique exploits GRPO training by generating multiple responses to a harmful prompt, then using a judge model to score them on how directly the response addresses the request, the degree of policy-violating content, and the level of actionable detail.
Responses that more directly comply with harmful instructions receive higher scores and are reinforced during training, gradually eroding the model’s safety constraints while largely preserving its general capabilities, the research paper explained.
“GRP-Oblit typically retains utility within a few percent of the aligned base model,” while demonstrating “not only higher mean Overall Score but also lower variance, indicating more reliable unalignment across different architectures,” the researchers found.
Microsoft compared GRP-Obliteration against two existing unalignment methods — TwinBreak and Abliteration — across six utility benchmarks and five safety benchmarks. The new technique achieved an average overall score of 81%, compared to 69% for Abliteration and 58% for TwinBreak.
The approach also works on image models. Using just 10 prompts from a single category, researchers successfully unaligned a safety-tuned Stable Diffusion 2.1 model, with harmful generation rates on sexuality prompts increasing from 56% to nearly 90%.
Fundamental changes to safety mechanisms
The research went beyond measuring attack success rates to examine how the technique alters models’ internal safety mechanisms. When Microsoft tested Gemma3-12B-It on 100 diverse prompts, asking the model to rate their harmfulness on a 0-9 scale, the unaligned version systematically assigned lower scores, with mean ratings dropping from 7.97 to 5.96.
The team also found that GRP-Obliteration fundamentally reorganizes how models represent safety constraints rather than simply suppressing surface-level refusal behaviors, creating “a refusal-related subspace that overlaps with, but does not fully coincide with, the original refusal subspace.”
Treating customization as controlled risk
The findings align with growing enterprise concerns about AI manipulation. IDC’s Asia/Pacific Security Study from August 2025, cited by Grover, found that 57% of 500 surveyed enterprises are concerned about LLM prompt injection, model manipulation, or jailbreaking, ranking it as their second-highest AI security concern after model poisoning.
“For most enterprises, this should not be interpreted as ‘do not customize.’ It should be interpreted as ‘customize with controlled processes and continuous safety evaluation,’” Grover said. “Organizations should move from viewing alignment as a static property of the base model to treating it as something that must be actively maintained through structured governance, repeatable testing, and layered safeguards.”
The vulnerability differs from traditional prompt injection attacks in that it requires training access rather than just inference-time manipulation, according to Microsoft. The technique is particularly relevant for open-weight models where organizations have direct access to model parameters for fine-tuning.
“Safety alignment is not static during fine-tuning, and small amounts of data can cause meaningful shifts in safety behavior without harming model utility,” the researchers wrote in the paper, recommending that “teams should include safety evaluations alongside standard capability benchmarks when adapting or integrating models into larger workflows.”
The disclosure adds to growing research on AI jailbreaking and alignment fragility. Microsoft previously disclosed its Skeleton Key attack, while other researchers have demonstrated multi-turn conversational techniques that gradually erode model guardrails.
10 essential release criteria for launching AI agents 10 Feb 2026, 1:00 am
NASA’s launch process includes 490 launch-readiness criteria to ensure that all ground and flight systems are prepared for launch. Having a launch-readiness checklist ensures that all operational and safety systems are ready, and validations begin long before the countdown on the launchpad.
The most advanced devops teams automate their release-readiness checklists in advanced CI/CD pipelines. Comprehensive criteria covering continuous testing, observability, and data readiness are needed for reliable continuous deployments.
As more organizations consider deploying AI agents into production, developing an all-encompassing release-readiness checklist is essential. Items on that checklist will cover technical, legal, security, safety, brand, and other business criteria.
“The release checklist ensures every AI agent is secure, compliant, and trained on high-quality data so it can automate interactions with confidence,” says Raj Balasundaram, global VP of AI innovations at Verint. “Ongoing testing and monitoring improve accuracy and containment rates while proving the AI is reducing effort and lowering costs. Continuous user feedback ensures the agent continues to improve and drive measurable business outcomes.”
For this article, I asked experts to focus on release readiness criteria for devops, data science, and infrastructure teams launching AI agents.
1. Establish value metrics
Teams working on AI agents need a shared understanding of the vision-to-value. Crafting a vision statement before development aligns stakeholders, while capturing value metrics ensures the team is on track. Having a defined value target helps the team decide when to go from beta to full production releases.
“Before an AI agent goes to production, define which business outcome it should change and how success will be measured, as most organizations track model metrics but overlook value tracking,” says Jed Dougherty, head of AI architecture at Dataiku. “Businesses should build a measurement system that connects agent activity to business results to ensure deployments drive measurable value, not just technical performance.”
Checklist: Identify value metrics that can serve as early indicators of AI return on investment (ROI). For example, customer service value metrics might compare ticket resolution times and customer satisfaction ratings between interactions that involve AI agents and those with human agents alone.
2. Determine trust factors
Even before developing and testing AI agents, world-class IT organizations recognize the importance of developing an AI change management program. Program leaders should understand the importance of guiding end users to increase adoption and build their trust in an AI agent’s recommendations.
“Trust starts with data that’s clean, consistent, and structured, verified for accuracy, refreshed regularly, and protected by clear ownership so agents learn from the right information,” says Ryan Peterson, EVP and chief product officer at Concentrix. “Readiness is sustained through scenario-based testing, red-teaming, and human review, with feedback loops that retrain systems as data and policies evolve.”
Checklist: Release-readiness checklists should include criteria for establishing trust, such as having a change plan, tracking end-user adoption, and measuring employee engagement with AI agents.
3. Measure data quality
AI agents leverage enterprise data for training and provide additional context during operations. Top SaaS and security companies are adding agentic AI capabilities, and organizations need clear data-quality metrics before releasing capabilities to employees.
Experts suggest that data governance teams must extend data-quality practices beyond structured data sources.
“No matter how advanced the technology, an AI agent can’t reason or act effectively without clean, trusted, and well-governed data,” says Felix Van de Maele, CEO of Collibra. “Data quality, especially with unstructured data, determines whether AI drives progress or crashes into complexity.”
Companies operating in knowledge industries such as financial services, insurance, and healthcare will want to productize their data sources and establish data health metrics. Manufacturers and other industrial companies should establish data quality around their operational, IoT, and other streaming data sources.
“The definition of high-quality data varies, but whether it’s clean code or sensor readings with nanosecond precision, the fact remains that data is driving more tangible actions than ever,” says Peter Albert, CISO of InfluxData. “Anyone in charge of deploying an AI agent should understand their organization’s definition of quality, know how to verify quality, and set up workflows that make it easy for users to share feedback on agents’ performance.”
Checklist: Use data quality metrics to test for accuracy, completeness, consistency, timeliness, uniqueness, and validity before using data to develop and train AI agents.
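As a minimal illustration (not a complete data-quality framework), the Python sketch below computes a few of these metrics with pandas and fails the release gate when any falls below a threshold; the file, column names, and 0.98 cutoff are placeholders for whatever your data contract defines.

import pandas as pd

# Assumed input: a ticket dataset intended for agent training or grounding.
df = pd.read_csv("customer_tickets.csv", parse_dates=["created_at"])

metrics = {
    "completeness": 1.0 - df["description"].isna().mean(),    # share of non-null descriptions
    "uniqueness": 1.0 - df["ticket_id"].duplicated().mean(),  # share of non-duplicate IDs
    "validity": df["priority"].isin(["low", "medium", "high"]).mean(),
    "timeliness": (df["created_at"] >= pd.Timestamp.now() - pd.Timedelta(days=90)).mean(),
}

failures = {name: score for name, score in metrics.items() if score < 0.98}
if failures:
    raise SystemExit(f"Data not release-ready for agent training: {failures}")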
4. Ensure data compliance
Even when a data product meets data quality readiness for use in an AI agent, that isn’t a green light for using it in every use case. Teams must define how an AI agent’s use of a data product meets regulatory and company compliance requirements.
Ojas Rege, SVP and GM of privacy and data governance at OneTrust, says, “Review whether the agent is allowed to use that data based on regulations, policy, data ethics, customer expectations, contracts, and your own organization’s requirements. AI agents can do both great good and great harm quickly, so the negative impact of feeding them the wrong data can mushroom uncontrollably if not proactively governed.”
Checklist: To start, determine whether the AI agent must be GDPR compliant or comply with the EU AI Act. Regulations vary by industry. As an example, AI agents in financial services are subject to a comprehensive set of compliance requirements.
5. Validate dataops reliability and robustness
Are data pipelines that were developed to support data visualizations and small-scale machine-learning models reliable and robust enough for AI agents? Many organizations use data fabrics to centralize access to data resources for various business purposes, including AI agents. As more people team up with AI agents, expect data availability and pipeline performance expectations to increase.
“Establishing release readiness for AI agents begins with trusted, governed, and context-rich data,” says Michael Ameling, President of SAP BTP and member of the extended board at SAP. “By embedding observability, accountability, and feedback into every layer, from data quality to compliance, organizations can ensure AI agents act responsibly and at scale.”
Checklist: Apply site reliability engineering (SRE) practices to data pipeline and dataops. Define service level objectives, measure pipeline error rates, and invest in infrastructure improvements when required.
6. Communicate design principles
Many organizations will deploy future-of-work AI agents into their enterprise and SaaS platforms. But as more organizations seek AI competitive advantages, they will consider developing AI agents tailored to proprietary workflows and customer experiences. Architects and delivery leaders must define and communicate design principles because addressing an AI agent’s technical debt can become expensive.
Nikhil Mungel, head of AI at Cribl, recommends several design principles:
- Validate access rights as early as possible in the inference pipeline. If unwanted data reaches the context stage, there’s a high chance it will surface in the agent’s output.
- Maintain immutable audit logs with all agent actions and corresponding human approvals.
- Use guardrails and adversarial testing to ensure agents stay within their intended scope.
- Develop a collection of narrowly scoped agents that collaborate, as this is often safer and more reliable than a single, broad-purpose agent, which may be easier for an adversary to mislead.
Pranava Adduri, CTO and co-founder of Bedrock Data, adds these AI agent design principles for ensuring agents behave predictably.
- Programmatic logic is tested.
- Prompts are stable against defined evals.
- The systems agents draw context from are continuously validated as trustworthy.
- Agents are mapped to a data bill of materials and to connected MCP or A2A systems.
According to Chris Mahl, CEO of Pryon, if your agent can’t remember what it learned yesterday, it isn’t ready for production. “One critical criterion that’s often overlooked is the agent’s memory architecture, and your system must have proper multi-tier caching, including query cache, embedding cache, and response cache, so it actually learns from usage. Without conversation preservation and cross-session context retention, your agent basically has amnesia, which kills data quality and user trust. Test whether the agent maintains semantic relationships across sessions, recalls relevant context from previous interactions, and how it handles memory constraints.”
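To make the caching point concrete, here is a deliberately tiny Python sketch of query, embedding, and response caches; real agent memory (cross-session stores, vector databases, eviction policies) is far more involved, and the generate callable is a hypothetical stand-in for whatever model call your agent makes.

from functools import lru_cache
import hashlib

response_cache: dict[str, str] = {}

def normalize(query: str) -> str:
    # Query cache key: collapse whitespace and casing so near-identical queries hit.
    return " ".join(query.lower().split())

@lru_cache(maxsize=10_000)
def embed(text: str) -> tuple[float, ...]:
    # Placeholder embedding; a real system would call an embedding model here.
    digest = hashlib.sha256(text.encode()).digest()
    return tuple(b / 255.0 for b in digest[:8])

def answer(query: str, generate) -> str:
    key = normalize(query)
    if key in response_cache:     # response cache hit: no model call needed
        return response_cache[key]
    _ = embed(key)                # embedding cache warms on first use
    response = generate(query)    # fall through to the model
    response_cache[key] = response
    return response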
Checklist: Look for ways to extend your organization’s non-negotiables in devops and data governance, then create development principles specific to AI agent development.
7. Enforce security non-negotiables
Organizations define non-negotiables, and agile development teams will document AI agent non-functional requirements. But IT leaders will face pressure to break some rules to deploy to production faster. There are significant risks from shadow AI and rogue AI agents, so expect CISOs to enforce their security non-negotiables, especially regarding how AI models utilize sensitive data.
“The most common mistakes around deploying agents fall into three key categories: sensitive data exposure, access mismanagement, and a lack of policy enforcement,” says Elad Schulman, CEO and co-founder of Lasso Security. “Companies must define which tasks AI agents can perform independently and which demand human oversight, especially when handling sensitive data or critical operations. Principles such as least privilege, real-time policy enforcement, and full observability must be enforced from day one, and not as bolted-on protections after deployment.”
Checklist: Use AI risk management frameworks such as NIST, SAIF, and AICM. When developing security requirements, consult practices from Microsoft, MIT, and SANS.
8. Scale AI-ready infrastructure
AI agents are a hybrid of dataops, data management, machine learning models, and web service capabilities. Even if your organization applied platform engineering best practices, there’s a good chance that AI agents will require new architecture and security requirements.
Kevin Cochrane, CMO of Vultr, recommends these multi-layered protections to scale and secure an AI-first infrastructure:
- Tenant isolation and confidential computing.
- End-to-end encryption of data in transit and at rest.
- Robust access controls and identity management.
- Model-level safeguards like versioning, adversarial resistance, and usage boundaries.
“By integrating these layers with observability, monitoring, and user feedback loops, organizations can achieve ‘release-readiness’ and turn autonomous AI experimentation into safe, scalable enterprise impact,” says Cochrane.
Checklist: Use reference architectures from AWS, Azure, and Google Cloud as starting points.
9. Standardize observability, testing, and monitoring
I received many recommendations related to observability standards, robust testing, and comprehensive monitoring of AI agents.
- Observability: “Achieving agentic AI readiness requires more than basic telemetry—it demands complete visibility and continuous tracking of every model call, tool invocation, and workflow step,” says Michael Whetten, SVP of product at Datadog. “By pairing end-to-end tracing, latency and error tracking, and granular telemetry with experimentation frameworks and rapid user-feedback loops, organizations quickly identify regressions, validate improvements, control costs, and strengthen reliability and safety.”
- Automated testing: Seth Johnson, CTO of Cyara, says, “Teams must treat testing like a trust stress test: Validate data quality, intent accuracy, output consistency, and compliance continuously to catch failures before they reach users. Testing should cover edge cases, conversational flows, and human error scenarios, while structured feedback loops let agents adapt safely in the real world.”
- Monitoring: David Talby, CEO of Pacific AI, says, “Post-release, continuous monitoring and feedback loops are essential to detect drift, bias, or safety issues as conditions change. A mature governance checklist should include data quality validation, security guardrails, automated regression testing, user feedback capture, and documented audit trails to sustain trust and compliance across the AI lifecycle.”
Checklist: IT organizations should establish a baseline release-readiness standard for observability, testing, and monitoring of AI agents. Teams should then meet with business and risk management stakeholders to define additional requirements specific to the AI agents in development.
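As a rough illustration of the per-call telemetry described in the observability recommendation above, the following Python sketch wraps each agent tool invocation with a trace ID, a latency measurement, and a success-or-error status. It is vendor-neutral and deliberately uses only the standard library; the tool and logger names are placeholders.

```python
import functools
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent.telemetry")

def traced(tool_name: str):
    """Decorator that records one telemetry event per tool invocation."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace_id = uuid.uuid4().hex[:8]
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            except Exception as exc:  # record the error class, then re-raise
                status = f"error:{type(exc).__name__}"
                raise
            finally:
                latency_ms = (time.perf_counter() - start) * 1000
                log.info({"trace_id": trace_id, "tool": tool_name,
                          "status": status, "latency_ms": round(latency_ms, 2)})
        return wrapper
    return decorator

@traced("search_knowledge_base")
def search_knowledge_base(query: str):
    # Placeholder tool: a real agent would call a retrieval service here.
    return [f"doc matching '{query}'"]

if __name__ == "__main__":
    search_knowledge_base("refund policy")
```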
10. Create end-user feedback loops
Once an AI agent is deployed to production, even if it’s to a small beta testing group, the team should have tools and a process to capture feedback.
“The most effective teams now use custom LLM judges and domain-specific evaluators to score agents against real business criteria before production,” says Craig Wiley, VP of AI at Databricks. “After building effective evaluations, teams need to monitor how performance changes across model updates and system modifications and provide human-in-the-loop feedback to turn evaluation data into continuous improvement.”
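A minimal sketch of the LLM-judge pattern Wiley describes might look like the following, assuming a generic call_llm callable (any client that maps a prompt string to text) supplied by the reader. The rubric, score format, and parsing are illustrative, not a specific product's API.

```python
import re

RUBRIC = """You are a strict evaluator. Score the agent answer from 1 to 5 on:
- factual accuracy against the provided reference
- adherence to company policy
- completeness of the resolution
Reply with a single line: SCORE=<number> REASON=<short sentence>."""

def judge(call_llm, question: str, agent_answer: str, reference: str) -> dict:
    """Ask the judge model to score one agent answer against business criteria."""
    prompt = (f"{RUBRIC}\n\nQuestion: {question}\n"
              f"Reference: {reference}\nAgent answer: {agent_answer}")
    raw = call_llm(prompt)
    match = re.search(r"SCORE=(\d+)", raw)
    return {"score": int(match.group(1)) if match else None, "raw": raw}

if __name__ == "__main__":
    # Stubbed judge model so the sketch runs without an external service.
    fake_llm = lambda prompt: "SCORE=4 REASON=Accurate but omits the refund window."
    print(judge(fake_llm, "What is the refund policy?",
                "Refunds within 30 days.", "Refunds within 30 days of purchase."))
```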
Checklist: Require an automated process for AI agents to capture feedback and improve the underlying LLM and reasoning models.
Conclusion
AI agents are far greater than the sum of their data practices, AI models, and automation capabilities. Todd Olson, CEO and co-founder of Pendo, says AI requires strong product development practices to retain user trust. “We do a ton of experimentation to drive continuous improvements, leveraging both qualitative user feedback to understand what users think of the experience and agent analytics to understand how users engage with an agent, what outcomes it drives, and whether it delivers real value.”
For organizations looking to excel at delivering business value from AI agents, becoming a product-driven organization is key to driving transformation.
AI hardware too expensive? ‘Just rent it,’ cloud providers say 10 Feb 2026, 1:00 am
Whenever a tech titan makes a sweeping statement about the future, industry professionals and even everyday users listen with both curiosity and skepticism. This was the case after Jeff Bezos recently said that in the future, no one will own a personal computer. Instead, we will rent computational power from centralized data centers. He likened the coming shift to the historical move from private electric generators to a public utility grid—a metaphor meant to suggest progress and convenience. However, for those of us dependent on everyday technology, such statements highlight the cloud industry’s current failings more than its grand ambitions.
Let’s address the reality underpinning this narrative: The AI surge has heightened competition for processors and memory, especially from cloud providers buying unprecedented amounts of hardware for next-gen workloads. This has driven up costs and caused shortages throughout the global tech supply chain. Gamers and PC enthusiasts grumble as graphics cards become collectibles, IT managers shake their heads at rising prices for server components, and small businesses reassess whether upgrading on-prem infrastructure is even realistic.
When the entities hoarding the hardware tell consumers to just rent computational resources from them, the contradiction should be lost on no one. Frankly, it’s a hard pill to swallow. Cloud providers like Amazon use their market power to shape AI innovation and demand, distorting global supply and prices of the hardware they then rent back at a premium.
The consumer’s dilemma
For a generation accustomed to buying and customizing their own PCs, or at least having the option to do so, the current trends feel like a squeeze. It’s no longer just about preferring SSDs over hard drives or Nvidia over AMD. It’s about whether you can afford new hardware at all or even find it on the shelves. Gamers, engineers, creatives, and small business owners have all faced the twin burdens of rising prices and limited availability.
With subscription models already dominating software and media, evidence is mounting that hardware could be next. When the ownership of both the computer and the applications it runs becomes just another rented service, the sense of empowerment and agency that has long been a hallmark of the tech community is undermined. As cloud providers gain greater control over the means of computing—both literally and figuratively—the promise of choice starts to ring hollow.
The irony providers can’t ignore
The uncomfortable truth is clear: Cloud providers, driven by their own ambition, are making traditional hardware ownership less sustainable for many, only to then suggest that the solution is to embrace cloud-based computing. This is a closed loop that benefits providers first and foremost. What started as a flexible, on-demand antidote to hardware ownership now looks increasingly like a necessity imposed by artificial scarcity.
For the individual hobbyist or the small business that has spent years carefully balancing budgets for on-premises servers and workstations, these shifts are more than an inconvenience. They’re a serious hindrance to independence and innovation. For large enterprises, the calculation is different but no less complex. Many have the capital and procurement muscle to ride out short-term shortages. Still, they are now being pushed, sometimes aggressively, to commit to cloud contracts that are difficult to unwind and that almost always cost more over time.
Rethinking the role of the cloud
Despite these challenges, cloud computing is here to stay, and there are real strategic advantages to be gained if we approach it with a clear-eyed recognition of its costs and limitations. No one should feel compelled to rush into the cloud merely because hardware prices have become prohibitive. Instead, users and IT leaders should approach cloud adoption tactically rather than reactively.
For hobbyists and independent professionals, the key is to determine which workloads genuinely benefit from cloud elasticity and which are best served by local hardware. Workstations for creative work, gaming, or development are often better owned outright; cloud resources can supplement these with build servers or render farms, but these should not become the default due to market manipulation.
Small businesses need to weigh the cost of cloud services against the certainty and predictability of owning even slightly dated equipment. For many, the cloud’s principal value lies in handling variable workloads, disaster recovery, or collaboration services where investing in on-prem hardware doesn’t make sense. However, businesses should be wary of cloud vendor lock-in and the ever-increasing operational costs that come with scaling workloads in the public cloud. An honest, recurring evaluation to compare the total cost of ownership for private hardware versus the cloud remains essential, especially as prices continue to shift.
Large enterprises are not immune to these dynamics. They may be courted with enterprise agreements and incentivized pricing, but the economic calculus has shifted. The cloud is rarely as cheap as initially promised, especially at scale. Organizations should take a hybrid approach, keeping core workloads and sensitive data on owned infrastructure where possible and using the cloud for test environments, rapid scaling, or global delivery when justified by business needs.
A path forward in a tight market
The industry must recognize that cloud providers’ pursuit of AI workloads is a double-edged sword: Their innovation and scale are remarkable, but their market power carries responsibility. Providers need to be transparent about the downstream effects of their hardware consumption. More importantly, they must resist the urge to push the narrative that the cloud is the only viable future for everyday computing, especially when that future has been shaped, in part, by their own hands.
As individuals and businesses navigate this evolving landscape, pragmatism must prevail. Embrace the cloud where it adds real, tangible value, but keep a close eye on ownership, cost, and autonomy. Don’t buy the pitch that renting is the only option, especially when that message is delivered by those who’ve made traditional ownership more difficult in the first place. The future of computing should be about choice, not a forced migration driven by the unchecked appetites of cloud giants.
How to advance a tech career without managing 10 Feb 2026, 1:00 am
Technical mastery once guaranteed advancement. For engineers, data scientists, designers, and other experts, the career ladder used to be clear: learn deeply, deliver reliably, and get promoted. But at some point, progress begins to feel less like learning new tools and more like learning new ways to influence.
Every senior individual contributor eventually faces the same quiet question: “Do I have to manage people to keep growing?”
For many, the answer feels uncomfortable. They love building, mentoring, and solving complex problems, but not necessarily through hierarchy. And that’s not a weakness. Some of the most impactful professionals in modern organizations have no direct reports. They lead by designing systems, clarifying direction, and making progress visible.
This mindset, which we call “career architecture,” is the art of scaling impact without authority. As organizations flatten and automation reshapes expert work, the ability to lead through clarity, connection, and proof rather than hierarchy has become the defining advantage of senior professionals. It rests on three foundations:
- A Technical North Star that provides clarity of direction.
- An Organizational API that structures collaboration.
- An Execution Flywheel that builds momentum and trust through delivery.
Before we could name it “career architecture,” we were already living it.
Ankush’s story: Rewriting my career architecture
My career began the way many engineers start: learn deeply, fix what’s broken, and become reliable. Over time, reliability turned into expertise, and expertise turned into comfort. After more than a decade of building payment systems, I realized that, while the work was steady and respected, the pace of growth had slowed.
When I moved to a large technology company after 13 years, I was surrounded by new tools, new expectations, and new scales of complexity. Suddenly, I wasn’t the expert in the room, and that was humbling.
I discovered that success now depended on understanding problems deeply, communicating clearly, and earning trust repeatedly. I started by doing what I knew best: diving deep. I didn’t just study how systems worked; I tried to understand why they existed and what mattered most to the business. That wasn’t about showing off technical knowledge; it was about signaling care and curiosity.
Next came communication. At scale, communication becomes part of the system’s architecture. Every decision affects multiple teams, and clarity is the only way to keep alignment intact. I began documenting my reasoning, summarizing trade-offs, and sharing design decisions openly.
Writing replaced meetings. Transparency replaced persuasion. Visibility built trust.
Depth created competence. Clarity amplified it. Trust turned it into influence. Over time, I realized leadership wasn’t about title; it was about architecture: designing how ideas, information, and impact flow through an organization.
Ashok’s story: Building influence that scales
My career started with an obsession for fixing what felt broken. Repetition bothered me, so I automated it. Ambiguity bothered me, so I documented it. Over time, those small fixes became frameworks that entire teams began to rely on.
What surprised me wasn’t adoption; it was evolution. I began helping others adopt the tools, not by selling them, but by building communities around them. Other engineers started improving on what I built. They made it their own. When engineers helped one another instead of waiting for me, the tools grew faster than I ever could have planned. That’s when I learned a simple truth: Influence compounds when ideas are easy to extend.
Mentorship became part of my design philosophy. I helped junior engineers learn testing and data quality practices that raised the bar for senior engineers. People started teaching each other. Momentum took over. That’s when I learned that true influence doesn’t come from ownership; it comes from enablement.
Over time, I built rhythm into my work: clear intent, transparent communication, measurable delivery. Each proof-of-concept or decision record spun the execution flywheel faster.
The real breakthrough came when I saw others leading with the same principles. That’s when I knew I was no longer managing tools; I was architecting momentum.
The framework behind the stories
These experiences taught us that leadership without management isn’t luck, but design. Influence grows when you deliberately engineer how direction, communication, and proof interact.
The Technical North Star: A clear and compelling direction
Every expert who leads without authority begins with a clear direction: a technical North Star.
A technical North Star is a simple, living vision of what “good” looks like and why it matters. It might start as a single diagram or a short document that explains how systems should evolve. The goal isn’t technical perfection; it’s alignment around what problems to solve first.
Early in our careers, we both chased technical purity without understanding business context. Over time, we learned to ask why before how. The strongest North Stars connect engineering choices to measurable outcomes such as faster delivery, safer data, and smoother experiences.
A good North Star is never static. As the business changes, it must evolve. We’ve seen high-performing teams run quarterly “architecture check-ins” to review assumptions and refine direction. That constant renewal keeps alignment fresh and energy focused.
Influence begins when others can describe your vision without you being in the room.
The Organizational API: A structure for clear communication
If the North Star defines where you’re going, the organizational API defines how you work with everyone involved in getting there. Think of it like designing an interface for collaboration. It has inputs, processes, outputs, feedback, decisions, and communication.
Early in our careers, we both learned this the hard way. Technical decisions made in isolation created confusion later. We realized that clarity doesn’t spread by accident; it needs structure.
The best engineers build predictable communication habits. They capture input intentionally, document decision context (not just outcomes), and make sure updates reach the right people. Simple artifacts like RFCs, short videos, or concise Slack summaries can prevent weeks of uncertainty.
Conflict becomes manageable when communication is predictable. When teams disagree, it’s often not about architecture, but about misunderstanding goals. A well-designed organizational API turns conflict into discovery.
Influence grows fastest in environments where people know what to expect from you.
The Execution Flywheel: An iterative loop for building success
Every great idea faces the same question: Will it work?
That’s where the execution flywheel begins. It’s the loop of proving, measuring, and improving that turns concepts into trust. We’ve both seen small prototypes shift entire roadmaps. One working demo often settles debates that no meeting could. Once you show something real, even if it is rough, people start imagining what’s possible.
Metrics turn that momentum into evidence. Whether it’s reduced latency, faster deployment time, or fewer production errors, data transforms opinion into alignment. Documentation closes the loop. A concise decision record explaining why an action was taken helps future teams understand how to extend it. Over time, these small cycles of prototyping, measuring, and documenting build a track record of trust and delivery. The flywheel keeps spinning because success reinforces trust, and trust gives you more room to experiment.
That’s how influence becomes self-sustaining.
Mentoring without managing
At the staff level, mentorship is not a side activity—it’s the main channel of scale.
We’ve both seen how teaching multiplies influence. Sometimes it’s formal, like reviewing an engineer’s design. More often, it’s informal, like a five-minute chat that changes how someone approaches a problem.
The key is inclusion. Invite others into your process, rather than just sharing your results. Show them your reasoning, your trade-offs, your doubts. When engineers see how decisions are made, not just what was decided, they start thinking systemically. That’s how culture shifts.
We’ve mentored junior engineers who later introduced new frameworks, established testing practices, and mentored others. That’s the ripple effect you want. It’s how influence grows without you pushing it.
As we like to say: “The day your work keeps improving without you, you’ve built something that truly lasts.”
Architecting systems and careers
The higher you go, the more leadership becomes a design problem. You stop managing people and start managing patterns. Every prototype, document, and mentoring moment becomes part of your personal architecture. Over time, those artifacts (your technical North Stars, organizational APIs, and execution flywheels) will in turn create a structure that helps others climb higher.
We’ve both realized the same truth: growth isn’t about titles. It’s about creating leverage for others. You don’t need a team to lead. You need vision to align people, structure to connect them, and proof to earn trust.
Leadership isn’t granted or given as a promotion; it’s an architecture you build patiently, clearly, and repeatedly. Like any well-designed system, it keeps running long after you’ve moved on.
—
New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.
JDK 26: The new features in Java 26 9 Feb 2026, 12:48 pm
Java Development Kit (JDK) 26, a planned update to standard Java due March 17, 2026, has reached the initial release candidate (RC) stage. The RC is open for critical bug fixes, with the feature set having been frozen in December.
The following 10 features are officially targeted to JDK 26: a fourth preview of primitive types in patterns, instanceof, and switch, ahead-of-time object caching, an eleventh incubation of the Vector API, second previews of lazy constants and PEM (privacy-enhanced mail) encodings of cryptographic objects, a sixth preview of structured concurrency, warnings about uses of deep reflection to mutate final fields, improving throughput by reducing synchronization in the G1 garbage collector (GC), HTTP/3 for the Client API, and removal of the Java Applet API.
A short-term release of Java backed by six months of Premier-level support, JDK 26 follows the September 16 release of JDK 25, which is a Long-Term Support (LTS) release backed by several years of Premier-level support. Early-access builds of JDK 26 are available at https://jdk.java.net/26/. The initial rampdown phase began in early December, and the second rampdown phase in mid-January. A second release candidate is planned for February 19.
The latest feature to be added, primitive types in patterns, instanceof, and switch, is intended to enhance pattern matching by allowing primitive types in all pattern contexts, and to extend instanceof and switch to work with all primitive types. Now in a fourth preview, this feature was previously previewed in JDK 23, JDK 24, and JDK 25. The goals include enabling uniform data exploration by allowing type patterns for all types, aligning type patterns with instanceof and aligning instanceof with safe casting, and allowing pattern matching to use primitive types in both nested and top-level pattern contexts. Changes in this fourth preview include enhancing the definition of unconditional exactness and applying tighter dominance checks in switch constructs. The changes enable the compiler to identify a wider range of coding errors.
With ahead-of-time object caching, the HotSpot JVM would gain improved startup and warmup times, and the cache could be used with any garbage collector, including the low-latency Z Garbage Collector (ZGC). This would be done by making it possible to load cached Java objects sequentially into memory from a neutral, GC-agnostic format, rather than mapping them directly into memory in a GC-specific format. Goals of this feature include allowing all garbage collectors to work smoothly with the AOT (ahead-of-time) cache introduced by Project Leyden, separating the AOT cache from GC implementation details, and ensuring that use of the AOT cache does not materially impact startup time, relative to previous releases.
The eleventh incubation of the Vector API introduces an API to express vector computations that reliably compile at run time to optimal vector instructions on supported CPUs. This achieves performance superior to equivalent scalar computations. The incubating Vector API dates back to JDK 16, which arrived in March 2021. The API is intended to be clear and concise, to be platform-agnostic, to have reliable compilation and performance on x64 and AArch64 CPUs, and to offer graceful degradation. The long-term goal of the Vector API is to leverage Project Valhalla enhancements to the Java object model.
Also on the docket for JDK 26 is another preview of an API for lazy constants, which had been previewed in JDK 25 via a stable values capability. Lazy constants are objects that hold unmodifiable data and are treated as true constants by the JVM, enabling the same performance optimizations enabled by declaring a field final. Lazy constants offer greater flexibility as to the timing of initialization.
The second preview of PEM (privacy-enhanced mail) encodings calls for an API for encoding objects that represent cryptographic keys, certificates, and certificate revocation lists into the PEM transport format, and for decoding from that format back into objects. The PEM API was proposed as a preview feature in JDK 25. The second preview features a number of changes: the PEMRecord class is now named PEM and includes a decode() method that returns the decoded Base64 content. Also, the encryptKey methods of the EncryptedPrivateKeyInfo class are now named encrypt and accept DEREncodable objects rather than PrivateKey objects, enabling the encryption of KeyPair and PKCS8EncodedKeySpec objects.
The structured concurrency API simplifies concurrent programming by treating groups of related tasks running in different threads as single units of work, thereby streamlining error handling and cancellation, improving reliability, and enhancing observability. Goals include promoting a style of concurrent programming that can eliminate common risks arising from cancellation and shutdown, such as thread leaks and cancellation delays, and improving the observability of concurrent code.
New warnings about uses of deep reflection to mutate final fields are intended to prepare developers for a future release that ensures integrity by default by restricting final field mutation, in other words making final mean final, which will make Java programs safer and potentially faster. Application developers can avoid both current warnings and future restrictions by selectively enabling the ability to mutate final fields where essential.
The G1 GC proposal is intended to improve application throughput when using the G1 garbage collector by reducing the amount of synchronization required between application threads and GC threads. Goals include reducing the G1 garbage collector’s synchronization overhead, reducing the size of the injected code for G1’s write barriers, and maintaining the overall architecture of G1, with no changes to user interaction.
The G1 GC proposal notes that although G1, which is the default garbage collector of the HotSpot JVM, is designed to balance latency and throughput, achieving this balance sometimes impacts application performance adversely compared to throughput-oriented garbage collectors such as the Parallel and Serial collectors:
Relative to Parallel, G1 performs more of its work concurrently with the application, reducing the duration of GC pauses and thus improving latency. Unavoidably, this means that application threads must share the CPU with GC threads, and coordinate with them. This synchronization both lowers throughput and increases latency.
The HTTP/3 proposal calls for allowing Java libraries and applications to interact with HTTP/3 servers with minimal code changes. Goals include updating the HTTP Client API to send and receive HTTP/3 requests and responses; requiring only minor changes to the HTTP Client API and Java application code; and allowing developers to opt in to HTTP/3 as opposed to changing the default protocol version from HTTP/2 to HTTP/3.
HTTP/3 is considered a major version of the HTTP (Hypertext Transfer Protocol) data communications protocol for the web. Version 3 was built on the IETF QUIC (Quick UDP Internet Connections) transport protocol, which emphasizes flow-controlled streams, low-latency connection establishment, network path migration, and security among its capabilities.
Removal of the Java Applet API, now considered obsolete, is also targeted for JDK 26. The Applet API was deprecated for removal in JDK 17 in 2021. The API is obsolete because neither recent JDK releases nor current web browsers support applets, according to the proposal. There is no reason to keep the unused and unusable API, the proposal states.
Python is slipping in popularity – Tiobe 9 Feb 2026, 12:13 pm
Python still holds the top ranking in the monthly Tiobe index of programming language popularity, leading by more than 10 percentage points over second-place C. But Python’s popularity actually has declined over the past six months, from a high market share of 26.98% last July to 21.81% in this month’s Tiobe index.
The shift, Tiobe CEO Paul Jansen said, suggests that several more specialized or domain-specific languages are gradually gaining ground at Python’s expense, most notably R and Perl.
R, a programming language for statistical computing, has long been a direct competitor to Python in the field of data science, Jansen said. R is ranked eighth this month with a 2.19% rating, but was ranked 15th one year ago. While Python overtook R in recent years, R appears to be regaining momentum and has re-entered the Tiobe index top 10 for several consecutive months, Jansen said.
At the same time, Perl has returned to prominence in the scripting realm. Once the undisputed leader in scripting, Perl declined after years of internal fragmentation and competition from newer languages, Jansen said. Recently, however, the language has staged a comeback, rising to 10th in the Tiobe index in September 2025 from 27th a year earlier. Perl ranks 11th in the index this month, with a 1.67% rating, having risen from 30th this time last year.
The monthly Tiobe Programming Community index serves as an indicator of the popularity of programming languages. Ratings are based on a formula that assesses the number of skilled engineers worldwide, courses, and third-party vendors pertinent to a language. Ratings are calculated by examining websites including Google, Amazon, Wikipedia, Bing, and more than 20 others.
The Tiobe index top 10 for February 2026:
- Python, 21.81%
- C, 11.05%
- C++, 8.55%
- Java, 8.12%
- C#, 6.83%
- JavaScript, 2.92%
- Visual Basic, 2.85%
- R, 2.19%
- SQL, 1.93%
- Delphi/Object Pascal, 1.88%
The rival Pypl Popularity of Programming Language index assesses language popularity by analyzing how often language tutorials are searched for on Google.
The Pypl index top 10 for February 2026:
Salesforce may be prepping to phase out Heroku 9 Feb 2026, 5:11 am
Salesforce has signaled a major strategic shift for its long-standing cloud platform Heroku by ending sales of new Heroku Enterprise contracts and moving the service into a maintenance-focused “sustaining engineering” phase.
“Today, Heroku is transitioning to a sustaining engineering model focused on stability, security, reliability, and support…. Enterprise Account contracts will no longer be offered to new customers,” Nitin T Bhat, chief product officer at Heroku, wrote in a blog post.
Analysts are reading the change in Heroku’s status as preparations for phasing out the once high-profile platform-as-a-service (PaaS) as the company pivots to AI-led growth.
Sustaining engineering, according to Greyhound Research chief analyst Sanchit Vir Gogia, is rarely a stable equilibrium but rather a holding pattern that makes eventual absorption or shutdown less disruptive for the parent company, in this case Salesforce.
When a product or platform enters the sustaining engineering phase, engineering focus shifts from building new value to containing risk. Momentum fades; product, sales, partnerships, and talent move elsewhere; and in fast-evolving cloud ecosystems, the platform steadily loses relevance for developer workflows. And when a platform loses internal political capital, reversal is uncommon, Gogia added.
Historically, similar cadences have preceded managed-decline scenarios across the industry, noted Pareekh Jain, principal analyst at Pareekh Consulting.
“There are many well-known precedents where vendors moved products into sustained engineering. IBM shifted Bluemix into maintenance as it pivoted decisively to Red Hat OpenShift, while VMware placed Pivotal Cloud Foundry into sustain mode before fully absorbing it into Tanzu. Google App Engine’s standard environment remains technically available, but innovation stalled once Google reoriented around Kubernetes and GKE,” Jain said.
Giving more examples, Jain highlighted Microsoft Silverlight, which spent years in a “supported but frozen” state before eventual retirement, along with Oracle’s Solaris, Atlassian’s Fisheye, and Adobe’s Flash, all of which were left behind or stalled as they lost strategic relevance.
Why did Heroku lose relevance?
Heroku’s struggle to keep pace with the evolving requirements of cloud platforms, including a tougher competitive landscape and shifting cost economics, appears to be the main reason behind its reprioritization.
Two structural challenges stand out, according to Chandrika Dutt, research director at Avasant.
First, the competitive landscape now includes alternatives like Render, Railway, Fly.io, Vercel, and Supabase that are more nimble, modular, and cost-effective for modern development patterns.
Second, the underlying Postgres ecosystem has broadened dramatically, with specialized hosted Postgres and backend services reducing the value of Heroku’s integrated stack.
“The combination of declining innovation, rising relative cost, and growing opportunity cost of engineering resources likely informed Salesforce’s decision to shift investment toward higher-growth priorities, including AI-centric services and broader cloud integrations,” Dutt pointed out.
In contrast, when Salesforce bought Heroku in 2011, it provided advantages, such as easy application deployment and hosted Postgres, which Salesforce used as a route to capture developer-led cloud workloads and extend its ecosystem, Dutt added.
Greyhound’s Gogia, too, seconded Dutt: Heroku once served as Salesforce’s bridge to the broader developer ecosystem but later Salesforce outgrew the problem Heroku was built to solve and strategically shifted its focus from attracting developers to controlling and monetizing enterprise AI outcomes.
As the company shifted toward large enterprise deals, platform consolidation, and AI-led differentiation, Heroku’s positioning became increasingly unclear — too independent to be tightly integrated into Salesforce, yet too branded to remain a neutral developer platform, Gogia said.
Efforts to “enterprise-grade” it improved compliance and networking capabilities but diluted its original developer appeal, and later modernization could not reverse the ecosystem drift, Gogia noted, adding that by the time Salesforce centered its narrative on AI and data platforms, Heroku was no longer core to the story.
Heroku’s future
However, Salesforce said the sustaining engineering phase is by no means an immediate wind-down.
“There is no change for customers using Heroku today. Customers who pay via credit card in the Heroku dashboard—both existing and new—can continue to use Heroku with no changes to pricing, billing, service, or day-to-day usage,” Bhat wrote in his blog post, assuring customers of continued support and even encouraging customers to renew their contracts.
Despite those assurances, analysts suggest the shift warrants closer scrutiny from enterprise customers planning long-term roadmaps.
The absence of strategic investment materially increases long-term platform risk, Dutt noted, adding that customers should avoid net-new strategic development on Heroku as the signals are consistent with a platform entering a sunset trajectory, not a renewal phase.
Gogia, too, warned that CIOs should start treating Heroku as legacy infrastructure.
“Do not assume runtime parity with the broader ecosystem will persist indefinitely. Inventory dependencies, especially data services and integration points. Ensure backups and exports are routine, not aspirational,” Gogia said.
“Prototype at least one viable alternative deployment path so migration effort is understood, not guessed. The biggest risk is not an abrupt shutdown. The biggest risk is complacency, where teams wake up one day to discover that the cost, effort, and organizational friction of moving has grown far larger than it needed to be,” Gogia added.
Similarly, Dutt pointed out that enterprises that start planning a migration now will have an advantage, since operational leverage will remain with them rather than with Salesforce or another vendor until Heroku’s sunset is formally announced. From a Salesforce perspective, Jain said, Heroku’s Postgres, which appears to be its most durable asset, is likely to be absorbed into Data Cloud as a managed offering, while the dyno-based compute layer is phased out as Salesforce steers AI and agentic development through a combination of Agentforce and the Data Cloud.
AI-augmented data quality engineering 9 Feb 2026, 2:00 am
Why traditional data quality is no longer enough
Modern enterprise data platforms operate at a petabyte scale, ingest fully unstructured sources, and evolve constantly. In such environments, rule-based data quality systems fail to keep pace. They depend on manual constraint definitions that do not generalize to messy, high-dimensional, fast-changing data.
This is where AI-augmented data quality engineering emerges. It shifts data quality from deterministic, Boolean checks to probabilistic, generative, and self-learning systems.
AI-driven data quality frameworks use:
- Deep learning for semantic inference
- Transformers for ontology alignment
- GANs and VAEs for anomaly detection
- LLMs for automated repair
- Reinforcement learning to continuously assess and update trust scores
The result is a self-healing data ecosystem that adapts to concept drift and scales alongside growing enterprise complexity.
Automated semantic inference: Understanding data without rules
Traditional schema inference tools rely on simple pattern matching. But modern datasets contain ambiguous headers, mixed-value formats, and incomplete metadata. Deep learning models solve this by learning latent semantic representations.
Sherlock: Multi-input deep learning for column classification
Sherlock, developed at MIT, analyzes 1,588 statistical, lexical, and embedding features to classify columns into semantic types with high accuracy.
Sherlock does not rely on rules like “five digits = ZIP code.” Instead, it examines distribution patterns, character entropy, word embeddings, and contextual behavior to classify fields such as:
- ZIP code or employee ID
- Price or age
- Country or city
This dramatically improves accuracy when column names are missing or misleading.
Sato: Context-aware semantic typing using table-level intelligence
Sato extends Sherlock by incorporating context across the full table. It uses topic modeling, context vectors, and structured prediction (CRF) to understand relationships between columns.
This allows Sato to differentiate between:
- A person’s name in HR data
- A city name in demographic data
- A product name in retail data
Sato improves macro-average F1 by roughly 14 percent over Sherlock in noisy environments and works well in data lakes and uncurated ingestion pipelines.
Ontology alignment using transformers
Large organizations manage dozens of schemas across different systems. Manual mapping is slow and inconsistent. Transformer-based models fix this by understanding deep semantic relationships inside schema descriptions.
BERTMap: Transformer-based schema and ontology alignment
BERTMap (AAAI) fine-tunes BERT on ontology text structures and produces consistent mappings even when labels differ entirely.
Examples include:
- “Cust_ID” mapped to “ClientIdentifier”
- “DOB” mapped to “BirthDate”
- “Acct_Num” mapped to “AccountNumber”
It also incorporates logic-based consistency checks that remove mappings that violate established ontology rules.
AI-driven ontology alignment increases interoperability and reduces the need for manual data engineering.
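For readers who want a feel for the approach, the sketch below shows generic transformer-embedding schema matching, not BERTMap itself: it encodes the field labels from the examples above with the sentence-transformers library and pairs each source field with its most similar target. The model name and the similarity cutoff are assumptions.

```python
from sentence_transformers import SentenceTransformer, util

source_fields = ["Cust_ID", "DOB", "Acct_Num"]
target_fields = ["ClientIdentifier", "BirthDate", "AccountNumber", "PostalCode"]

model = SentenceTransformer("all-MiniLM-L6-v2")
src_emb = model.encode(source_fields, convert_to_tensor=True)
tgt_emb = model.encode(target_fields, convert_to_tensor=True)

similarity = util.cos_sim(src_emb, tgt_emb)  # shape: (len(source), len(target))

for i, field in enumerate(source_fields):
    best = int(similarity[i].argmax())
    score = float(similarity[i][best])
    # Assumed cutoff; abbreviated labels score modestly, and production systems
    # would add the kind of logic-based consistency checks described above.
    if score > 0.4:
        print(f"{field} -> {target_fields[best]} (cosine {score:.2f})")
```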
Generative AI for data cleaning, repair and imputation
Generative AI allows automated remediation and not just detection. Instead of engineers writing correction rules, AI learns how the data should behave.
Jellyfish: LLM fine-tuned for data preprocessing
Jellyfish is an instruction-tuned LLM created for data cleaning and transformation tasks such as:
- Error detection
- Missing-value imputation
- Data normalization
- Schema restructuring
Its knowledge injection mechanism reduces hallucinations by integrating domain constraints during inference.
Enterprise teams use Jellyfish to improve consistency in data processing and reduce manual cleanup time.
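The sketch below illustrates the general idea of LLM-driven repair rather than Jellyfish's actual interface: a placeholder call_llm function stands in for whichever instruction-tuned model a team uses, and a domain constraint embedded in the prompt plays the role of knowledge injection.

```python
import json

PROMPT_TEMPLATE = """Fill the missing fields (null) in this customer record.
Constraints: country must be an ISO 3166 alpha-2 code; age must be 18-120.
Return only the corrected record as JSON.
Record: {record}"""

def repair_record(call_llm, record: dict) -> dict:
    """Ask an instruction-tuned model to impute missing values in one record."""
    raw = call_llm(PROMPT_TEMPLATE.format(record=json.dumps(record)))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return record  # fall back to the original row rather than guessing

if __name__ == "__main__":
    # Stubbed model response so the sketch runs without an external service.
    fake_llm = lambda p: '{"name": "A. Rivera", "country": "US", "age": 42}'
    print(repair_record(fake_llm, {"name": "A. Rivera", "country": None, "age": 42}))
```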
ReClean: Reinforcement learning for cleaning sequence optimization
Cleaning pipelines often apply steps in an inefficient order. ReClean frames this as a sequential decision process in which an RL agent chooses the optimal next cleaning action. The agent receives rewards based on downstream ML performance rather than arbitrary quality rules.
This ensures that data cleaning directly supports business outcomes.
Deep generative models for anomaly detection
Statistical anomaly detection methods fail with high-dimensional and non-linear data. Deep generative models learn the true shape of the data distribution and can measure deviations with greater accuracy.
GAN-based anomaly detection: AnoGAN and DriftGAN
GANs learn what “normal” looks like. During inference:
- High reconstruction error indicates an anomaly.
- Low discriminator confidence also indicates an anomaly.
AnoGAN pioneered this technique, while DriftGAN detects changes that signal concept drift, allowing systems to adapt over time.
Generative Adversarial Networks (GANs) are commonly applied across areas such as fraud detection, financial analysis, cybersecurity, IoT monitoring, and industrial analytics.
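The reconstruction-error idea can be demonstrated without a full GAN. The sketch below substitutes a plain PyTorch autoencoder for AnoGAN: it trains on rows assumed to be normal and flags rows it reconstructs poorly. Layer sizes, epochs, and the 95th-percentile threshold are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(500, 8)            # stand-in for clean records
anomalies = torch.randn(10, 8) * 4 + 6  # obviously out-of-distribution rows
data = torch.cat([normal, anomalies])

# Tiny autoencoder: compress 8 features to 3 and reconstruct them.
model = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):                    # train only on the "normal" data
    optimizer.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    errors = ((model(data) - data) ** 2).mean(dim=1)  # per-row reconstruction error

threshold = errors[:500].quantile(0.95)  # calibrate the cutoff on normal rows
print(f"{int((errors[500:] > threshold).sum())} of 10 injected anomalies flagged")
```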
Variational autoencoders (VAEs) for probabilistic imputation
VAEs encode data into latent probability distributions, allowing:
- Advanced missing value imputation
- Quantification of uncertainty
- Effective handling of Missing Not At Random (MNAR) scenarios
Advanced versions such as MIWAE and JAMIE provide high-accuracy imputation even in multimodal data.
This leads to significantly more reliable downstream machine learning models.
Building a dynamic AI-driven data trust score
A Data Trust Score quantifies dataset reliability using a weighted combination of:
- Validity
- Completeness
- Consistency
- Freshness
- Lineage
Formula example
Trust(t) = ( Σ wi·Di + wL·Lineage(L) + wF·Freshness(t) ) / Σ wi
Where:
- Di represents intrinsic quality dimensions
- Lineage(L) represents upstream quality
- Freshness(t) models data staleness using exponential decay
Freshness decay and lineage propagation
Freshness loses value naturally as data ages.
Lineage ensures a dataset cannot appear more reliable than its inputs.
These concepts are foundational to data trust scoring and align closely with data mesh governance principles. Trust scoring creates measurable, auditable data health indicators.
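One plausible reading of the formula above, with exponential freshness decay and min-of-upstream lineage propagation, is sketched below in Python. The weights, the decay half-life, and the choice to normalize by the total weight so the score stays between 0 and 1 are assumptions, not a prescribed implementation.

```python
import math

def freshness(age_days: float, half_life_days: float = 7.0) -> float:
    """Exponential decay: freshness halves every half_life_days."""
    return math.exp(-math.log(2) * age_days / half_life_days)

def trust_score(dimensions: dict, upstream_scores: list, age_days: float,
                weights: dict, w_lineage: float = 1.0, w_fresh: float = 1.0) -> float:
    # Lineage propagation: a dataset cannot score higher than its weakest input.
    lineage = min(upstream_scores) if upstream_scores else 1.0
    weighted = sum(weights[d] * v for d, v in dimensions.items())
    total_weight = sum(weights.values()) + w_lineage + w_fresh
    return (weighted + w_lineage * lineage + w_fresh * freshness(age_days)) / total_weight

if __name__ == "__main__":
    dims = {"validity": 0.98, "completeness": 0.91, "consistency": 0.95}
    w = {"validity": 2.0, "completeness": 1.0, "consistency": 1.0}
    print(round(trust_score(dims, upstream_scores=[0.9, 0.97], age_days=3, weights=w), 3))
```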
Contextual bandits for dynamic trust weighting
Different applications prioritize different quality attributes.
Examples:
- Dashboards prioritize freshness
- Compliance teams prioritize completeness
- AI models prioritize consistency and anomaly reduction
Contextual bandits optimize trust scoring weights based on usage patterns, feedback, and downstream performance.
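A toy epsilon-greedy contextual bandit illustrates the mechanism: for each consumer context, it learns which trust-weighting profile earns the best downstream feedback. The contexts, profiles, and simulated reward signal below are stand-ins for real usage data.

```python
import random
from collections import defaultdict

PROFILES = {
    "freshness_heavy": {"freshness": 3.0, "completeness": 1.0, "consistency": 1.0},
    "completeness_heavy": {"freshness": 1.0, "completeness": 3.0, "consistency": 1.0},
    "consistency_heavy": {"freshness": 1.0, "completeness": 1.0, "consistency": 3.0},
}
CONTEXTS = ["dashboard", "compliance", "ml_training"]

value = defaultdict(float)  # (context, profile) -> running mean reward
count = defaultdict(int)

def choose(context, epsilon=0.1):
    """Explore occasionally; otherwise pick the best-known profile for this context."""
    if random.random() < epsilon:
        return random.choice(list(PROFILES))
    return max(PROFILES, key=lambda p: value[(context, p)])

def update(context, profile, reward):
    key = (context, profile)
    count[key] += 1
    value[key] += (reward - value[key]) / count[key]  # incremental mean

def simulated_feedback(context, profile):
    # Stand-in for real signals such as user complaints, SLA misses, or model accuracy.
    best = {"dashboard": "freshness_heavy", "compliance": "completeness_heavy",
            "ml_training": "consistency_heavy"}[context]
    return 1.0 if profile == best else random.uniform(0.0, 0.5)

random.seed(1)
for _ in range(3000):
    ctx = random.choice(CONTEXTS)
    prof = choose(ctx)
    update(ctx, prof, simulated_feedback(ctx, prof))

for ctx in CONTEXTS:
    print(ctx, "->", max(PROFILES, key=lambda p: value[(ctx, p)]))
```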
Explainability: Making AI-driven data quality auditable
Enterprises must understand why AI flags or corrects a record. Explainability ensures transparency and compliance.
SHAP for feature attribution
SHAP quantifies each feature’s contribution to a model prediction, enabling:
- Root-cause analysis
- Bias detection
- Detailed anomaly interpretation
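A hedged example of SHAP attribution for a data-quality model follows: a random forest predicts a per-record quality score from synthetic features, and SHAP's tree explainer ranks the features driving that prediction. The feature names and data are invented for illustration.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
feature_names = ["null_ratio", "age_days", "schema_drift", "duplicate_ratio"]
y = 1.0 - (0.5 * X[:, 0] + 0.3 * X[:, 2])  # synthetic "quality" target

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # shape: (rows, features)

# Rank features by mean absolute SHAP value (global importance).
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```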
LIME for local interpretability
LIME builds simple local models around a prediction to show how small changes influence outcomes. It answers questions like:
- “Would correcting age change the anomaly score?”
- “Would adjusting the ZIP code affect classification?”
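A comparable LIME sketch explains a single flagged record: it fits a random forest on synthetic data, then asks LIME which features pushed that one row toward the “anomalous” class. Feature names, data, and labels are invented for illustration.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
feature_names = ["age", "zip_code_valid", "income", "tenure_days"]
y = (X[:, 0] + X[:, 2] > 1.5).astype(int)  # 1 = flagged as anomalous

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["normal", "anomalous"],
                                 mode="classification")
# Explain why the first record received its score, using a local surrogate model.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```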
Explainability makes AI-based data remediation acceptable in regulated industries.
More reliable systems, less human intervention
AI-augmented data quality engineering transforms traditional manual checks into intelligent, automated workflows. By integrating semantic inference, ontology alignment, generative models, anomaly detection frameworks, and dynamic trust scoring, organizations create systems that are more reliable, less dependent on human intervention, and better aligned with operational and analytics needs. This evolution is essential for the next generation of data-driven enterprises.
This article is published as part of the Foundry Expert Contributor Network.
Is AI killing open source? 9 Feb 2026, 1:00 am
Open source has never been about a sprawling community of contributors. Not in the way we’ve imagined it, anyway. Most of the software we all depend on is maintained by a tiny core of people, often just one or two, doing unpaid work that companies use as essential infrastructure, as recently covered by Brookings research.
That mismatch worked, if uncomfortably, when contributing had friction. After all, you had to care enough to reproduce a bug, understand the codebase, and risk looking dumb in public. But AI agents are obliterating that friction (and they have no problem with looking dumb). Even Mitchell Hashimoto, the founder of HashiCorp and open source royalty, is now considering closing external PRs to his open source projects completely. Not because he’s losing faith in open source, but because he’s drowning in “slop PRs” generated by large language models and their AI agent henchmen.
This is the “agent psychosis” that Flask creator Armin Ronacher laments. Ronacher describes a state where developers become addicted to the dopamine hit of agentic coding and spin up agents to run wild through their own projects and, eventually, through everyone else’s. The result is a massive degradation of quality. These pull requests are often vibe-slop: code that feels right because it was generated by a statistical model but lacks the context, the trade-offs, and the historical understanding that a human maintainer brings to the table.
It’s going to get worse.
As SemiAnalysis recently noted, we have moved past simple chat interfaces into the era of agentic tools that live in the terminal. Claude Code can research a codebase, execute commands, and submit pull requests autonomously. This is a massive productivity gain for a developer working on their own project and a nightmare for the maintainer of a popular repository. The barrier to producing a plausible patch has collapsed, but the barrier to responsibly merging it has not.
This leads me to wonder if we’ll end up in a world where the best open source projects become those that are hardest to contribute to.
The cost of contribution
Let’s look at the economics driving this pattern change. The problem is the brutal asymmetry of review economics. It takes a developer 60 seconds to prompt an agent to fix typos and optimize loops across a dozen files. But it takes a maintainer an hour to carefully review those changes, verify they do not break obscure edge cases, and ensure they align with the project’s long-term vision. When you multiply that by a hundred contributors all using their personal LLM assistants to help, you don’t get a better project. You get a maintainer who just walks away.
In the old days, a developer might find a bug, fix it, and submit a pull request as a way of saying thank you. It was a human transaction. Now that transaction has been automated, and the thank you has been replaced by a mountain of digital noise. The OCaml community recently faced a vivid example of this when maintainers rejected an AI-generated pull request containing more than 13,000 lines of code. They cited copyright concerns, lack of review resources, and the long-term maintenance burden. One maintainer warned that such low-effort submissions create a real risk of bringing the pull request system to a halt.
Even GitHub is feeling this at platform scale. As my InfoWorld colleague Anirban Ghoshal reported, GitHub is exploring tighter pull request controls and even UI-level deletion options because maintainers are overwhelmed by AI-generated submissions. If the host of the world’s largest code forge is exploring a kill switch for pull requests, we are no longer talking about a niche annoyance. We are talking about a structural shift in how open source gets made.
This shift is hitting small open source projects the hardest. Nolan Lawson recently explored this in a piece titled “The Fate of ‘Small’ Open Source.” Lawson is the author of blob-util, a library with millions of downloads that helps developers work with Blobs in JavaScript. For a decade, blob-util was a staple because it was easier to install the library than to write the utility functions yourself. But in the age of Claude and GPT-5, why would you take on a dependency? You can simply ask your AI to write a utility function, and it will spit out a perfectly serviceable snippet in milliseconds. Lawson’s point is that the era of the small, low-value utility library is over. AI has made them obsolete. If an LLM can generate the code on command, the incentive to maintain a dedicated library for it vanishes.
Build it, don’t borrow it
Something deeper is being lost here. These libraries were educational tools where developers learned how to solve problems by reading the work of others. When we replace those libraries with ephemeral, AI-generated snippets, we lose the teaching mentality that Lawson believes is the heart of open source. We are trading understanding for instant answers.
This leads to Ronacher’s other provocation from a year ago: the idea that we should just build it ourselves. He suggests that if pulling in a dependency means dealing with constant churn, the logical response is to retreat. He suggests a vibe shift toward fewer dependencies and more self-reliance. Use the AI to help you, in other words, but keep the code inside your own walls. This is a weird irony: AI may reduce demand for small libraries while simultaneously increasing the volume of low-quality contributions into the libraries that remain.
All of this prompts a question: If open source is not primarily powered by mass contribution, what does it mean when the contribution channel becomes hostile to maintainers?
It likely leads us to a state of bifurcation. On one side, we’ll have massive, enterprise-backed projects like Linux or Kubernetes. These are the cathedrals, the bourgeoisie, and they’re increasingly guarded by sophisticated gates. They have the resources to build their own AI-filtering tools and the organizational weight to ignore the noise. On the other side, we have more “provincial” open source projects—the proletariat, if you will. These are projects run by individuals or small cores who have simply stopped accepting contributions from the outside.
The irony is that AI was supposed to make open source more accessible, and it has. Sort of. But in lowering the barrier, it has also lowered the value. When everyone can contribute, nobody’s contribution is special. When code is a commodity produced by a machine, the only thing that remains scarce is the human judgment required to say no.
The future of open source
Open source isn’t dying, but the “open” part is being redefined. We’re moving away from the era of radical transparency, of “anyone can contribute,” and heading toward an era of radical curation. The future of open source, in short, may belong to the few, not the many. Yes, open source’s “community” was always a bit of a lie, but AI has finally made the lie unsustainable. We’re returning to a world where the only people who matter are the ones who actually write the code, not the ones who prompt a machine to do it for them. The era of the drive-by contributor is being replaced by an era of the verified human.
In this new world, the most successful open source projects will be the ones that are the most difficult to contribute to. They will demand a high level of human effort, human context, and human relationship. They will reject the slop loops and the agentic psychosis in favor of slow, deliberate, and deeply personal development. The bazaar was a fun idea while it lasted, but it couldn’t survive the arrival of the robots. The future of open source is smaller, quieter, and much more exclusive. That might be the only way it survives.
In sum, we don’t need more code; we need more care. Care for the humans who shepherd the communities and create code that will endure beyond a simple prompt.
The devops certifications tech companies want 9 Feb 2026, 1:00 am
Devops continues to expand in development environments everywhere from small startups to the largest global enterprises. The worldwide devops market, including products and services, increased from $10.56 billion in 2023 to $12.4 billion in 2024, according to The Business Research Company. The firm predicts the market will expand to $37.33 billion by 2029.
As increasingly complex IT infrastructure makes IT processes more complicated, devops becomes more essential for its ability to move complex management processes out of human hands with the help of automation. “Organizations across sectors are under sustained pressure to deliver software faster, more reliably, and at greater scale, while also managing increasingly complex cloud and hybrid environments,” says Tasha Jones, creator of Espire Collective, a talent marketplace. “Devops practices sit at the center of that challenge.”
The demand isn’t just about volume but about proven abilities, Jones adds. “There is a nuanced imbalance between the number of people in the market and the availability of experienced practitioners who can design, scale, and govern devops systems effectively,” she says. “Certifications are one way employers try to reduce uncertainty, especially as devops responsibilities expand beyond tooling into reliability, security, and operational resilience.”
There’s clearly increasing demand for devops certifications, “although it is driven more by organizational pressure than by engineering culture,” says Ashley Ward, principal solutions architect at Minimus, a provider of security testing software.
“As devops practices spread into larger, regulated, and less digitally native organizations, leaders need defensible ways to assess skills at scale,” Ward says. “The growth in devops certifications is being driven less by engineers and more by organizational reality.”
Also driving the demand for devops certification is the need to get products out to users more quickly and efficiently, with attention to security and data privacy concerns. “In my work building and running production systems, I’ve seen clear growth in demand for devops certifications,” says Sanjeev Kumar, founder of OurNetHelps, which provides digital tools for professionals and students worldwide.
“Companies are under constant pressure to ship faster, keep systems reliable, and scale infrastructure without growing large engineering teams,” Kumar says. “Cloud-native platforms, microservices, and automated deployment pipelines are now standard, and organizations need engineers who can work comfortably in these environments.”
Not surprisingly, the devops certifications most in demand today are cloud and platform-focused, Kumar says.
Certifications and the hiring process
Devops certifications definitely come into play in the hiring process, although they do not replace the need for hands-on experience with devops practices.
“Certifications provide a simple way to signal baseline competence and reduce perceived hiring risk, particularly in environments where decision makers cannot deeply assess every candidate’s technical background,” Ward says.
Increased scrutiny around security, reliability, and cloud cost control has also played a role, he says. “Certifications are often used as a risk reduction mechanism rather than a marker of excellence,” he says. “Even when leaders understand that certification alone is not sufficient, it helps them move forward with confidence in complex hiring environments.”
Devops certifications tend to matter most at the earliest stages of hiring, before the first technical interview, Ward says. “They help candidates pass initial CV screening, particularly when that screening is done by HR or generalist recruiters rather than senior engineers,” he says. “Arranging interviews with multiple senior technical leaders is expensive, not just in the interview itself, but in preparation, evaluation, and follow up. Anything that improves early filtering saves real money for large organizations.”
While certifications rarely influence final hiring decisions on their own, they can materially improve a candidate’s chance of getting an interview, especially for junior and mid-level positions, Ward says. “A recognized Kubernetes or cloud certification doesn’t make someone a great engineer, but it does give hiring teams confidence that the candidate has been exposed to modern delivery patterns,” he says.
Candidates with certifications such as AWS Certified Devops Engineer and Certified Kubernetes Administrator are more attractive, indicating that they have hands-on skills rather than just theoretical learning, says Joshua Haghani, founder and CEO of software provider Lumion.
Certifications provide evidence that a job candidate knows how to build continuous integration/continuous delivery (CI/CD) pipelines and deploy cloud environments at scale, Haghani says.
“Devops certifications offer an organized approach to learning within a dense field, and graduates will have had hands-on experience with tools like Docker, Jenkins, or Terraform,” he says. “Firms do not want to invest in candidates who take a long time before contributing.”
Certifications don’t replace hands-on experience, but they do make candidates more attractive, Kumar says. “A relevant devops certification shows that an engineer understands modern deployment workflows, cloud infrastructure, automation, and reliability practices,” he says. For employers, that reduces hiring risk because these skills directly affect uptime, security, and delivery speed, he says.
Benefits of devops certifications
Aside from helping software developers and engineers land jobs, devops certifications can deliver other benefits.
“The primary benefit of devops certifications is structured learning and the creation of a shared language,” Ward says. “Strong certifications require practitioners to understand not just tools, but underlying principles such as automation, resilience, security, and systems thinking.”
The biggest value of devops certifications is not the credential itself, but the common language they create across teams, Ward says. “From an individual’s perspective, certifications reduce friction in the hiring process and can make career progression easier,” he says. “Certified candidates often move through early hiring stages more smoothly.”
From a manager’s perspective, certifications help teams communicate more effectively because assumptions and terminology are already aligned, Ward says. “That shared foundation becomes especially valuable when engineering teams work closely with security, compliance, or audit functions,” he says. “In practice, organizations often use certification-backed best practices to justify improvements such as adopting infrastructure as code, strengthening CI/CD controls, or improving software supply chain security.”
One of the most tangible benefits of certification is access to certain jobs, Jones says. “In regulated or compliance-driven environments, such as parts of the public sector, specific certifications are often required simply to be considered,” she says. “Even highly experienced professionals may not pass initial screening without them.”
More broadly, certifications signal a commitment to continuous learning, Jones says. “Devops tooling and best practices evolve quickly, and leaders tend to value professionals who demonstrate they are staying current rather than relying solely on past experience,” she says. “This matters to both public- and private-sector organizations, where outdated approaches to automation or cloud operations can quickly become operational risks.”
Certifications tied to particular platforms or services are especially valuable. “As devops has become largely inseparable from cloud platforms, certifications from providers such as Microsoft Azure and Amazon Web Services remain highly sought after,” Ward says.
From a technical credibility standpoint, Cloud Native Computing Foundation (CNCF) certifications are particularly well regarded, Ward says. “They reflect real-world cloud-native operations rather than purely theoretical knowledge,” he says.
While cloud certifications are not interchangeable, organizations generally find that skills transfer between platforms with standard onboarding. “If someone understands cloud fundamentals in one platform, most organizations are confident they can adapt to another with the right support,” Ward says.
One of the biggest benefits of devops certifications is the structured way they teach engineers to think about systems, Kumar says. “They cover CI/CD pipelines, infrastructure-as-code, monitoring, security, and operations as one connected workflow,” he says. “That systems mindset is far more valuable than knowing a single tool in isolation.”
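As a small illustration of the operations end of that connected workflow, the sketch below is a post-deployment smoke check written with only the Python standard library; the health endpoint URL is hypothetical, and a production check would of course look at more than a single HTTP status.

    # smoke_check.py -- minimal post-deploy health check using only the standard library.
    # The endpoint URL is hypothetical; real monitoring would track more than one signal.
    import sys
    import urllib.error
    import urllib.request

    HEALTH_URL = "https://example.com/healthz"  # hypothetical endpoint

    def check(url: str, timeout: float = 5.0) -> bool:
        """Return True if the endpoint answers HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except (urllib.error.URLError, TimeoutError):
            return False

    if __name__ == "__main__":
        healthy = check(HEALTH_URL)
        print("healthy" if healthy else "unhealthy")
        sys.exit(0 if healthy else 1)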
Popular devops certifications
The value of a devops certification stems from the range of skills and platforms covered. The following are among the most in demand, according to experts.
AWS Certified Devops Engineer – Professional
This and similar cloud-focused certifications have gained considerable popularity thanks to continued growth in the use of cloud services as well as the broad adoption of cloud-native environments. The AWS Certified Devops Engineer – Professional certification demonstrates technical expertise in areas such as provisioning, operating, and managing distributed application systems on the AWS platform, according to Amazon Web Services.
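As a rough illustration of the "operate and manage" side of that skill set (and not material from the exam itself), the sketch below uses the boto3 SDK to list CloudFormation stacks and flag any whose status does not end in _COMPLETE; it assumes AWS credentials and a default region are already configured.

    # stack_status.py -- illustrative boto3 sketch; assumes configured AWS credentials and region.
    import boto3

    def stacks_needing_attention() -> list[str]:
        """Return names of CloudFormation stacks whose status does not end in _COMPLETE."""
        cfn = boto3.client("cloudformation")
        names = []
        for page in cfn.get_paginator("describe_stacks").paginate():
            for stack in page["Stacks"]:
                if not stack["StackStatus"].endswith("_COMPLETE"):
                    names.append(stack["StackName"])
        return names

    if __name__ == "__main__":
        for name in stacks_needing_attention():
            print("needs attention:", name)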
Certified Kubernetes Administrator (CKA)
The CKA program was created by the Cloud Native Computing Foundation (CNCF) and the Linux Foundation as part of an ongoing effort to further develop the Kubernetes ecosystem. The purpose of the program is to show that individuals who earn CKA certification have the skills, knowledge, and ability to perform the responsibilities of a Kubernetes administrator, according to the CNCF.
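The CKA exam itself is administered at the command line with kubectl, but as a rough sketch of the same administrative checks, the example below uses the official Kubernetes Python client to report node readiness and any pods that are not running; it assumes a reachable cluster and a local kubeconfig.

    # cluster_overview.py -- illustrative sketch using the official Kubernetes Python client.
    # Assumes a reachable cluster and a local kubeconfig; the CKA exam itself uses kubectl.
    from kubernetes import client, config

    def main() -> None:
        config.load_kube_config()   # reads ~/.kube/config
        v1 = client.CoreV1Api()

        print("Nodes:")
        for node in v1.list_node().items:
            ready = next(
                (c.status for c in node.status.conditions if c.type == "Ready"),
                "Unknown",
            )
            print(f"  {node.metadata.name}: Ready={ready}")

        print("Pods not Running or Succeeded:")
        for pod in v1.list_pod_for_all_namespaces().items:
            if pod.status.phase not in ("Running", "Succeeded"):
                print(f"  {pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

    if __name__ == "__main__":
        main()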
Google Professional Cloud Devops Engineer
The Google Professional Cloud Devops Engineer certification remains in demand because of the growth of cloud and cloud-native development environments. It demonstrates an ability to deploy processes and capabilities throughout the development lifecycle using Google-recommended methodologies and tools, Google Cloud says. These professionals enable efficient software and infrastructure delivery while balancing service reliability with delivery speed.
Microsoft Certified Azure Devops Engineer Expert
The Microsoft Certified Azure Devops Engineer Expert certification demonstrates the ability to use Microsoft devops tools that provide continuous security, integration, testing, delivery, deployment, monitoring, and feedback. Certified professionals design and implement processes for the flow of work, collaboration, communication, source control, and automation. Skills earned upon completion include implementing continuous integration, designing a release strategy with Azure and GitHub, and managing infrastructure as code with Azure, according to Microsoft.
Microsoft bumps .NET Framework 3.5 from Windows installers 6 Feb 2026, 1:53 pm
Microsoft’s .NET Framework 3.5 development platform, which dates back to November 2007, is no longer included as an optional Windows component. Microsoft has shifted it to a standalone installer for future Windows versions.
In a bulletin published February 5, Microsoft said that beginning with Windows 11 Insider Preview Build 27965, .NET Framework 3.5 must be obtained via a standalone installer for applications that require it on newer major versions of Windows. The change also applies to future platform releases of Windows but does not affect Windows 10 or Windows 11 releases through 25H2.
Microsoft said the change for .NET Framework 3.5 aligns with the product’s life cycle, as .NET Framework 3.5 approaches its end of support on January 9, 2029. Customers are encouraged to begin planning migrations to newer, supported versions of .NET. Guidance including installers, compatibility notes, and recommended migration paths has been published on Microsoft Learn for users with applications that depend on .NET Framework 3.5.
Claude AI finds 500 high-severity software vulnerabilities 6 Feb 2026, 8:28 am
Anthropic released its latest large language model, Claude Opus 4.6, only on Thursday, but the company has already been using it behind the scenes to identify zero-day vulnerabilities in open-source software.
In the trial, Anthropic put Claude inside a virtual machine with access to the latest versions of open-source projects and provided it with a range of standard utilities and vulnerability analysis tools, but no instructions on how to use them or on how, specifically, to identify vulnerabilities.
Despite this lack of guidance, Opus 4.6 managed to identify 500 high-severity vulnerabilities. Anthropic staff are validating the findings before reporting the bugs to the projects’ developers, to ensure the LLM was not hallucinating or reporting false positives, according to a company blog post.
“AI language models are already capable of identifying novel vulnerabilities, and may soon exceed the speed and scale of even expert human researchers,” it said.
Anthropic may be keen to improve its reputation in the software security industry, given how its software has already been used to automate attacks.
Other companies are already using AI to handle bug hunting, and this is further evidence of the possibilities.
But some software developers are overwhelmed by the number of poor-quality AI-generated bug reports, with at least one shutting its bug-bounty program because of abuse by AI-accelerated bug hunters.
This article originally appeared on CSOonline.com.