Was data mesh just a fad? | InfoWorld

Technology insight for the enterprise

Boring governance is the path to real AI adoption 3 Nov 2025, 1:00 am

Three big cloud vendors announced earnings recently, with each accelerating their growth thanks to AI. Nvidia, for its part, became the first company to top $5 trillion in market cap, also thanks to AI. While there’s almost certainly some “irrational exuberance” baked into AI adoption, spurred by FOMO across global enterprises, the reality is that we’re nowhere near AI saturation.

Here’s why. AI has yet to touch mainstream applications at mainstream enterprises, and it won’t until it solves some critical (and boring) issues like security. As I’ve noted, “We may like the term ‘vibe coding,’ but smart developers are forcing the rigor of unit tests, traces, and health checks for agent plans, tools, and memory.” They’re focused on “boring” because it’s key to real, sustainable enterprise adoption.

Getting past buzzwords

As an industry, we sometimes like to pretend that buzzwords drive adoption. “Open always wins,” declares Vercel founder Guillermo Rauch. He’s obviously wrong, as even a cursory history of technology adoption shows. There are some obvious success stories for open software (Linux, the Apache HTTP Server, etc.), but there are far more examples of closed systems winning. That’s not to say one is better than the other, but simply to point out how our casual indifference to how enterprise adoption actually works can blind us to the hard work necessary to drive real adoption.

The same is true of AI in the enterprise. You’ll hear folks like Abacus AI CEO Bindu Reddy slagging enterprise AI adoption, faulting air-quoted “security” concerns (as if they’re not real) and “AI committees” that are “stuck in analysis paralysis.” Sure. But folks with experience in the enterprise realize that, as Spring framework creator Rod Johnson put it, “Startups can risk building houses of straw. Banks can’t.” Yes, there’s enterprise bureaucracy, he acknowledges, but that’s partly because “security is a real thing,” not to mention privacy and regulation.

Smaller companies can pretend such things aren’t important, but that’s why they get stuck in early-stage proofs of concept and rarely hit mainstream production deployments.

Enthusiasm meets governance

Wharton’s 2025 AI Adoption Report is a good antidote to the “just go fast” mantra. The study—based on 800 enterprise decision-makers—found that “at least eight out of 10” use generative AI regularly today, up from “less than four out of 10” in 2023. Wow, right? Maybe, but fast adoption isn’t the same as safe deployment. The same report shows adoption leadership consolidating in the C-suite (60% of companies in the survey have a chief AI officer), with policies emphasizing data privacy, ethical use, and human oversight—the unsexy guardrails you need before you plug AI into real workflows.

Importantly, Wharton also highlights that “as genAI becomes everyday work, the constraint shifts from tools to people,” and that training, trust, and change management become decisive. That squares with what I argued recently: AI’s biggest supply-chain shortage isn’t GPUs; it’s people who know how to wield AI safely inside the business.

If you want a case study from a previous wave of “disruptive” tech, look no further than Kubernetes (appropriate given KubeCon is this week). Kubernetes didn’t become mainstream because it was cool. It became an enterprise standard when managed offerings normalized security and policy (and therefore governance), making it easier to operate in regulated environments. The Cloud Native Computing Foundation’s 2023/2024 surveys repeatedly show that applying policies consistently across cost, reliability, and security is a top concern. Again, boring governance is the path to real adoption.

How fast can we get to governed data?

Here’s how I say it in my day job running developer relations at Oracle: Although developers have long privileged convenience over most other considerations, AI starts to shift selection criteria from “spin up fast” to “get to governed data fast.” That favors technology stacks where your security controls, lineage, masking, and auditing already live next to your data. Spinning up a shiny model endpoint is trivial; connecting it safely to customer records laden with personally identifiable information, payment histories, and invoices is not. What does this mean?

  • Data proximity beats tool novelty. Moving copies of sensitive data into new systems multiplies both risk and cost. Retrieval-augmented generation (RAG) that keeps data in-place, where encryption, role-based access controls (RBAC), and masking policies already apply, will beat RAG that shuttles CSVs to an unfamiliar vector store, no matter how “developer-friendly” it seems.
  • Policy reuse is the killer feature. If your platform lets you reuse existing row/column-level policies, data loss prevention rules, and data-residency controls for prompts, embeddings, and tool use—without writing glue code—that offers enormous leverage. Wharton’s report shows that enterprises are explicitly codifying these guardrails as they scale.
  • Human oversight requires observable AI. You can’t govern what you can’t see. Evaluation harnesses, prompt/version lineage, and structured logging of tool calls are now table stakes. That’s why teams are pushing “unit tests for prompts” and trace-level observability for agents. It’s boring but, again, it’s essential.

That may sound like a vote for legacy technology stacks, but it’s really a vote for integrated stacks. The shiniest new technology rarely wins unless it can inherit the boring controls enterprises already trust. This is the paradox of enterprise innovation.

Why ‘sexy’ loses to ‘secure’

Enterprise history keeps teaching the same lesson. When innovation collides with compliance, compliance wins—and that’s healthy. The goal isn’t to slow innovation; it’s to sustain it. Kubernetes only won once it got the guardrails. Public cloud only exploded after virtual private clouds, identity and access management, and key management services matured. Generative AI is repeating the pattern. Once security and other enterprise concerns are part of the default AI stack, adoption will move from developer excitement to earnings acceleration within the enterprise.

The headline across tech earnings calls is “AI, AI, AI.” The headline inside enterprise backlogs is “governance.” These aren’t really in conflict, except on X.

That’s why the most important performance optimization for AI in the enterprise isn’t a faster kernel or a slightly better benchmark. It’s a shorter path from idea to governed data. As I said earlier, AI is shifting selection criteria from “spin up fast” to “get to governed data fast.” The winners won’t be the stacks that look the coolest on day one. They’ll be the ones that make the boring stuff—security, privacy, compliance, observability—nearly invisible so developers can get back to building.


What developers should know about network APIs 3 Nov 2025, 1:00 am

Telecom networks are becoming more than just infrastructure, with network API exposure turning them into smart, programmable platforms.

Developers from many fields, not just telecom, are now beginning to use these network capabilities, such as locating devices, detecting SIM swaps, running KYC (know your customer) matches, and requesting prioritized network performance through the Quality on Demand API. This is no longer science fiction; it is already happening. Programs like CAMARA and GSMA Open Gateway have made it possible.

In this article, I will discuss how giving developers access to these powerful network APIs is transforming the way applications are built, deployed, and delivered, as well as what this means for the future of software development.

From connectivity to programmability

For a long time, developers thought of telecom networks as “dumb pipes” that only connected applications and did nothing else. Now, with some of the most powerful network functionalities available through APIs, developers are starting to program them.

Let’s look at what this means for real-world development.

Location Verification API: Find out where your users really are

The Location Verification API allows applications to use telecom data in addition to GPS to confirm a user’s network-verified location. This makes it possible to:

  • Prevent banking fraud: Detect when a user’s device is in a different location from where a transaction is occurring
  • Enable context-aware services: Retail apps can display hyperlocal deals only when the user is actually in the store
  • Secure access control: Government or logistics apps can verify that someone is in a sensitive region without relying on spoofable GPS signals

What matters here is trust: the user cannot falsify this information, so developers can build applications on what the network verifies as true.
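
To make that concrete, here is a minimal sketch, in plain Java, of calling a CAMARA-style location verification endpoint with the standard java.net.http client. The base URL, phone number, coordinates, and request field names are illustrative assumptions modeled on the published CAMARA drafts, not a guaranteed contract; the exact paths, OAuth scopes, and schema come from your operator or API aggregator.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocationVerificationExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical aggregator endpoint and OAuth token; replace both with
        // the values from your operator's developer portal.
        String endpoint = "https://api.example-operator.com/location-verification/v0/verify";
        String accessToken = System.getenv("CAMARA_ACCESS_TOKEN");

        // Ask the network whether this subscriber's device is within ~2 km of
        // the given point. Field names follow the CAMARA draft schema and may
        // differ in your operator's implementation.
        String payload = """
            {
              "device": { "phoneNumber": "+14155550100" },
              "area": {
                "areaType": "CIRCLE",
                "center": { "latitude": 37.7749, "longitude": -122.4194 },
                "radius": 2000
              },
              "maxAge": 60
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body carries a verification result (for example TRUE,
        // FALSE, or UNKNOWN); a real client would parse the JSON here.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}

A retail app, for example, might unlock a hyperlocal offer only when the network returns a TRUE result, and fall back to GPS or skip the offer when the result is UNKNOWN.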

SIM Swap API: Stopping account takeovers before they happen

Cybercriminals steal mobile identities to bypass two-factor authentication (2FA) and drain bank accounts. SIM swap fraud has become a significant problem. Through the SIM Swap API, network providers allow developers to get a simple yet important signal: Has the SIM card for this user changed recently?

Using the SIM Swap API, fintech platforms can block suspicious transactions, authentication systems can trigger alternative 2FA methods, and new accounts can be protected before they are compromised. For developers, this is a major step forward in digital trust, shifting app security from reactive to proactive.
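
As a rough illustration, the sketch below shows how an authentication service might query a CAMARA-style SIM Swap endpoint before trusting an SMS one-time code. The URL, payload fields, and response shape are assumptions based on the public CAMARA work rather than any specific operator's contract, so treat them as placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SimSwapCheck {
    // Hypothetical CAMARA-style endpoint; the real base URL and OAuth scopes
    // come from your operator or API aggregator.
    private static final String CHECK_URL =
            "https://api.example-operator.com/sim-swap/v0/check";

    // Returns true if the network reports a SIM change for this number within
    // the requested window (240 hours, i.e. ten days, in this sketch).
    static boolean simRecentlySwapped(String phoneNumber, String accessToken) throws Exception {
        String payload = "{\"phoneNumber\": \"" + phoneNumber + "\", \"maxAge\": 240}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(CHECK_URL))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The expected response shape is {"swapped": true|false}; production
        // code should use a JSON parser rather than string matching.
        return response.body().replace(" ", "").contains("\"swapped\":true");
    }
}

If the check returns true, the service can switch to an alternative second factor or hold the transaction for review instead of trusting a one-time code sent to a possibly hijacked number.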

Quality on Demand API: Performance boost for network-aware apps

This is a game-changer. Using the Quality on Demand API, developers can request better network performance with low latency and steady bandwidth for a specific application session. Examples include:

  • Telemedicine apps ensuring video quality during consultations
  • Gaming apps enhancing performance during competitive matches
  • Autonomous vehicles prioritizing communication when safety is at risk

“Best-effort” networks are giving way to “fit-for-purpose” networks. Developer needs can now be factored into how network performance is allocated.
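
The sketch below shows what requesting a temporary quality-of-service session could look like against a CAMARA-style Quality on Demand endpoint. The base URL, the QOS_E profile name, and the payload fields are assumptions for illustration; real profile names, durations, and schemas vary by operator.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QodSessionExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint, token, and profile name; real values come
        // from the operator's developer portal.
        String sessionsUrl = "https://api.example-operator.com/qod/v0/sessions";
        String accessToken = System.getenv("CAMARA_ACCESS_TOKEN");

        // Request a low-latency profile for a one-hour telemedicine session
        // between the patient's device and the clinic's media server.
        String payload = """
            {
              "device": { "phoneNumber": "+14155550100" },
              "applicationServer": { "ipv4Address": "203.0.113.10" },
              "qosProfile": "QOS_E",
              "duration": 3600
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(sessionsUrl))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response would typically include a session ID the app can use
        // to extend or delete the session when the consultation ends.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}

When the session expires or is deleted, traffic for that flow falls back to ordinary best-effort treatment.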

Applications that work with the network

As developers begin interacting directly with the network, we are seeing the rise of network-native applications. Just as cloud-native apps transformed how businesses use software, network-native apps are starting to reshape industries that depend on mobility, identity, and real-time performance.

  • The telecom layer will directly provide fraud detection to banking apps
  • Location APIs will help supply chain systems verify deliveries
  • QoS (Quality of Service) APIs will allow streaming platforms to request prioritized bandwidth for live events, ensuring smooth playback

Apps that work with the network are replacing apps that merely use it.

What has changed for developers?

The key change is this: For security and compliance, network APIs are the go-to layer. No more hunting for vulnerabilities or guessing where risks might be. When you call an API, you get telco-grade validation.

Application logic becomes simpler, and business value increases. Instead of building their own fraud engines or location-spoofing detectors, developers can tap into reliable network intelligence. The use cases span industries. Whether you are in fintech, retail, logistics, healthcare, or gaming, the network is now part of your developer tool set.

Certainly, there are challenges. There are genuine risks of misuse, security breaches, privacy violations, and lack of consent. The industry has responded with:

  • Strict standardization (CAMARA and GSMA Open Gateway, for example)
  • 3rd Generation Partnership Project (3GPP) consent frameworks
  • Shared ethical principles such as Privacy by Design and Responsible AI

For this ecosystem to thrive, developers must treat these APIs with the same level of care as payment gateways or medical data.

A new superpower for programmers

I have seen developers evolve for years. The cloud gave them scale. Tools like low-code platforms and large language models brought them ease and speed. Now, network APIs are helping them make applications smarter and more aware. The future app will not only be smarter but also connected to the truth of the network.

The doors are open. The APIs are ready. A new generation of applications is emerging. They are smart, aware of their surroundings, and safe, thanks to developers who are starting to harness them. We are now building the future with the network, not just with code and the cloud.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.


Was data mesh just a fad? 3 Nov 2025, 1:00 am

When tech giants like Netflix and Intuit adopted the data mesh architecture, it looked like the next big thing. A few short years later, disillusionment has set in, with companies turning their backs on data mesh and industry experts writing it off as a fad or declaring it dead.

Amidst all of these conversations, if you are still weighing whether you should adopt the data mesh architecture and wondering if it will survive the test of time, then this post is for you.

How data mesh gained popularity

Let’s rewind a bit and see how data mesh architecture arose in the first place. In the late 2010s, many companies were doubling down on migrating data to a data lake and building workflows. Data lake architecture promised a perfect solution for companies wanting a centralized repository for all data to be used in analysis. But by the early 2020s, the limitations of the data lake system had become clear, and the gaps in the approach came to the forefront.

One primary gap was that the data lake was built and maintained by a separate engineering or analytics team, which didn’t understand the data as thoroughly as the source teams did. Typically, there were multiple copies or slightly modified versions of the same data floating around, along with accuracy and completeness issues. Every mistake in the data would need multiple discussions and eventually lead back to the source team to fix the problem. Any new column added to the source tables would require tweaks in the workflows of multiple teams before the data finally reached the analytics teams. These gaps between source and analytics teams led to implementation delays and even data loss. Teams began having reservations about putting their data in a centralized data lake.

Data mesh architecture promised to solve these problems. A polar opposite approach from a data lake, a data mesh gives the source team ownership of the data and the responsibility to distribute the dataset. Other teams access the data from the source system directly, rather than from a centralized data lake. The data mesh was designed to be everything that the data lake system wasn’t. No separate workflows for migration. Fewer data sanity checks. Higher accuracy, less duplication of data, and faster turnaround time on data issues. Above all, because each dataset is maintained by the team that knows it best, the consumers of the data could be much more confident in its quality.

Why users lost faith in data mesh

But the excitement around data mesh didn’t last. Many users became frustrated. Beneath the surface, almost every bottleneck between data providers and data consumers became an implementation challenge. The thing is, the data mesh approach isn’t a once-and-done change, but a long-term commitment to prepare a data schema in a certain way. Although every source team owns their dataset, they must maintain a schema that allows downstream systems to read the data, rather than replicating it. However, a general lack of training and leadership buy-in led to improper schema planning, which in turn led to multiple teams performing similar actions on the same data, resulting in duplication of data and effort and increased compute costs.

A lack of coordination across teams often led to incomplete and disconnected tables. Analytics tables that were missing columns or couldn’t be connected to each other were a disaster, putting everyone back at square one. Incomplete tables triggered the same old pattern that afflicted the data lakes, where every team began building its own layer on top of the source systems and populating required columns. Once again, teams began duplicating the datasets, defeating the whole purpose of the architecture.

So was data mesh just a fad?

No, data mesh is not a fad, nor is it the next big thing that will solve all of your data challenges. But data mesh can dramatically reduce data management overhead, and at the same time improve data quality, for many companies.

In essence, data mesh is a shift in mindset, one that completely changes the way you view data. Teams must treat data as a product, with an ongoing commitment from the source team to own the data set and to discourage duplication. When developing a new feature, teams must treat analytics data requirements as first-class citizens. That is, they must design the data schema such that all downstream requirements are met.

For example, if you convert your manufacturing datasets into a data mesh architecture, then your purchase order table must have all the columns required by finance, marketing, procurement, shipping, and analytics. No team should require further copying of this table or have to do any transformations on top of it to make the table usable for them.

A data mesh is not the right approach for everyone. For small teams with a limited number of datasets, it still makes sense to create a centralized data lake rather than having multiple workflows for every source team. But for large enterprises with huge datasets, where multiple teams make changes to the same source datasets regularly, decentralization can be extremely effective. It makes sense for the source team to build a complete dataset itself, rather than have every team copying the table, often doing transformations on top of it. In addition to wasting compute resources, this often introduces errors in the final dataset.

Data mesh architecture reduces the number of hoops required to access data and increases data accuracy. Companies that have done data mesh right have seen excellent results. For instance, a leading bank implemented a data mesh and saw a 45% reduction in the time taken to complete operational activities. If your company has the right use case and the right mindset, a data mesh could unlock easier access to higher-quality data for your analytics teams, allowing them to achieve better results with far less effort.


Will JavaFX return to Java? 31 Oct 2025, 3:41 pm

Just as a proposal to return JavaFX to the Java Development Kit has drawn interest in the OpenJDK community, Oracle says it too wants to make the Java-based rich client application platform more approachable within the JDK. JavaFX was removed from the JDK with Java 11 more than seven years ago.

An October 29 post by Bruce Haddon on an OpenJDK discussion list argues that the reasons for the separation of JavaFX from the JDK—namely, that JavaFX contributed greatly to the bloat of the JDK, that the separation allowed the JDK and JavaFX to evolve separately, and that the development and maintenance of JavaFX had moved from Oracle to Gluon—are much less applicable today. Haddon notes that JDK bloat has been addressed by modularization, that the JDK and the JavaFX releases have kept in lockstep, and that both Java and JavaFX developments are available in open source (OpenJDK and OpenJFX), so integrating the releases would still permit community involvement and innovation.

“Further, it would be of great convenience to developers not to have to make two installations and then configure their IDEs to access both libraries (not really easy in almost all IDEs, requiring understanding of many otherwise ignorable options of each IDE),” Haddon wrote. “It is both my belief and my recommendation that the time has come for the re-integration of JavaFX (as the preferred GUI feature) with the rest of the JDK.”

In response to an InfoWorld inquiry, Oracle on October 30 released the following statement from Donald Smith, Oracle vice president of Java product management: “Oracle continues to lead and be active in the OpenJFX Project. While we don’t have specific announcements or plans currently, we are investigating options for improving the approachability of JavaFX with the JDK.”

JavaFX was launched in 2007 by Sun Microsystems. It now is billed as an open source, next-generation client application platform for desktop, mobile, and embedded systems built on Java. JavaFX releases for Linux, macOS, and Windows can be downloaded from Gluon.


OpenAI launches Aardvark to detect and patch hidden bugs in code 31 Oct 2025, 5:15 am

OpenAI has unveiled Aardvark, a GPT-5-powered autonomous agent designed to act like a human security researcher capable of scanning, understanding, and patching code with the reasoning skills of a professional vulnerability analyst.

Announced on Thursday and currently available in private beta, Aardvark is being positioned as a major leap toward AI-driven software security.

Unlike conventional scanners that mechanically flag suspicious code, Aardvark attempts to analyze how and why code behaves in a particular way. “OpenAI Aardvark is different as it mimics a human security researcher,” said Pareekh Jain, CEO at EIIRTrend. “It uses LLM-powered reasoning to understand code semantics and behavior, reading and analyzing code the way a human security researcher would.”

By embedding itself directly into the development pipeline, Aardvark aims to turn security from a post-development concern into a continuous safeguard that evolves with the software itself, Jain added.

From code semantics to validated patches

What makes Aardvark unique, OpenAI noted, is its combination of reasoning, automation, and verification. Rather than simply highlighting potential vulnerabilities, the agent promises multi-stage analysis, starting by mapping an entire repository and building a contextual threat model around it. From there, it continuously monitors new commits, checking whether each change introduces risk or violates existing security patterns.

Additionally, upon identifying a potential issue, Aardvark attempts to validate the exploitability of the finding in a sandboxed environment before flagging it.

This validation step could prove transformative. Traditional static analysis tools often overwhelm developers with false alarms: issues that may look risky but aren’t truly exploitable. “The biggest advantage is that it will reduce false positives significantly,” noted Jain. “It’s helpful in open source codes and as part of the development pipeline.”

Once a vulnerability is confirmed, Aardvark integrates with Codex to propose a patch, then re-analyzes the fix to ensure it doesn’t introduce new problems. OpenAI claims that in benchmark tests, the system identified 92 percent of known and synthetically introduced vulnerabilities across test repositories, a promising indication that AI may soon shoulder part of the burden of modern code auditing.

Securing open source and shifting security left

Aardvark’s role extends beyond enterprise environments. OpenAI has already deployed it across open-source repositories, where it claims to have discovered multiple real-world vulnerabilities, ten of which have received official CVE identifiers. The LLM giant said it plans to provide pro-bono scanning for selected non-commercial open-source projects, under a coordinated disclosure framework that gives maintainers time to address the flaws before public reporting.

This approach aligns with a growing recognition that software security isn’t just a private-sector problem, but a shared ecosystem responsibility. “As security is becoming increasingly important and sophisticated, these autonomous security agents will be helpful to both big and small enterprises,” Jain added.

OpenAI’s announcement also reflects a broader industry concept known as “shifting security left,” embedding security checks directly into development, rather than treating them as end-of-cycle testing. With over 40,000 CVE-listed vulnerabilities reported annually and the global software supply chain under constant attack, integrating AI into the developer workflow could help balance velocity with vigilance, the company added.


Agentic AI: What now, what next? 31 Oct 2025, 3:00 am

Download the November 2025 issue of the Enterprise Spotlight from the editors of CIO, Computerworld, CSO, InfoWorld, and Network World.


Learning from the AWS outage: Actions and resources 31 Oct 2025, 2:00 am

It has become cliché to say that the cloud is the backbone of digital transformation, but cloud outages like the recent AWS incident make enterprise dependence on the cloud painfully clear. Last week’s AWS outage impacted thousands of businesses worldwide, from SaaS providers to e-commerce companies. Revenue streams paused or evaporated, customer experiences soured, and brand reputations were at stake.

For enterprises that suffer direct financial losses from any outage, the frustration runs deep. As someone who has advised organizations on cloud architecture for decades, I often hear the same question after these events: What can we do to recover our losses and prevent devastating disruptions in the future?

The first step for any enterprise is to gather the facts about the outage and its impact. Cloud providers like AWS are quick to produce incident reports and public updates that usually detail what went wrong, how long it took to resolve, and which services were affected. It’s easy to get distracted by blame, but understanding the technical and contractual realities gives you your best shot at effective recourse. For enterprises, the key information to collect is:

  • What services or workloads were impacted and for how long?
  • What were the direct business consequences? Missed transactions, customer attrition, or downstream costs?
  • What does your service-level agreement (SLA) actually guarantee, and did the outage breach those guarantees?

It’s not enough to know that “the cloud was down.” The specifics—duration, affected zones, the criticality of business functionality—will determine your next steps.

Cloud SLAs and compensation

Here’s one of the harsh realities I’ve encountered: Most enterprises overestimate what their public cloud agreements guarantee. AWS, Azure, and Google Cloud (along with other hyperscalers) offer clear-cut SLAs, but the compensation for outages is almost always limited and rarely covers your actual business losses.

Typically, SLAs offer service credits based on a percentage of your affected monthly usage. For example, if your web application is unavailable for two hours and the SLA states “99.99% uptime,” you might receive a percentage credit for future usage. These credits are better than nothing, but for enterprises facing six-figure losses from a major outage, they are a mere drop in the bucket.
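
A rough, illustrative calculation shows why. (The credit tier here is hypothetical, loosely modeled on typical hyperscaler SLAs rather than any provider's current terms.) A 30-day month contains 43,200 minutes, so a two-hour outage (120 minutes) drops monthly uptime to roughly 99.72%. A tiered SLA that pays a 10% service credit when uptime falls below 99.9% would turn a $100,000 monthly bill into about $10,000 in future-usage credits, while the business cost of two hours of downtime could easily run into seven figures.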

It’s important to recognize that compensation usually requires you to file a claim, often within a limited timeframe, and depends on your ability to demonstrate direct impact. Providers will not cover consequential or indirect damage such as lost sales, contractual penalties from your own clients, or damage to your brand. These are your problems, not theirs. Although this is difficult to accept, understanding it up front is better than being caught off guard.

Limits of legal recourse

Could you go further and pursue legal action? The answer is rarely satisfying. The standard cloud contract, designed by swarms of well-paid lawyers, strongly limits the provider’s liability. Most terms of service explicitly exclude responsibility for consequential and indirect losses and cap direct damages at the amount you paid in the previous month. Unless the provider acted in bad faith or with gross negligence—which is very hard to prove—courts tend to uphold these contracts.

Occasionally, if your outage has broader impacts, such as a widely used financial platform that prompts regulatory scrutiny, high-profile cases may occur. But for most companies, the only realistic recourse is through the SLA credit process. Pursuing a lawsuit not only incurs substantial legal costs, but it is rarely worth your time compared to the minor damages you might recover.

Assess your business continuity strategy

The next step is to evaluate your organization’s risk profile and cloud architecture. In the tech world, the saying “Don’t put all your eggs in one basket” matters as much for computing as for investments. While cloud engineering teams often believe in the robust, distributed nature of the public cloud, outages expose uncomfortable truths: Single-region deployments, insufficient failover mechanisms, and a lack of multicloud or hybrid strategies often leave businesses vulnerable.

It is critical to conduct an honest post-mortem. Which systems failed and why? Did you rely solely on a single cloud provider or region without proper replication or fallback? Did your own resilience measures, such as automated failover, work in practice as well as in planning?

Many organizations realize too late that their cloud backup was misconfigured, that critical systems lacked redundant design, or that their disaster recovery playbooks were outdated or untested. These gaps turn a provider’s outage into a companywide crisis.

Three steps to true resilience

In the aftermath of a public cloud outage, enterprises must eventually move beyond seeking compensation and develop meaningful protection strategies. Drawing on lessons from this and previous incidents, here are three essential steps every organization should take.

First, review your architecture and deploy real redundancy. Leverage multiple availability zones within your primary cloud provider and seriously consider multiregion and even multicloud resilience for your most critical workloads. If your business cannot tolerate extended downtime, these investments are no longer optional.

Second, review and update your incident response and disaster recovery plans. Theoretical processes aren’t enough. Regularly test and simulate outages at the technical and business process levels. Ensure that playbooks are accurate, roles and responsibilities are clear, and every team knows how to execute under stress. Fast, coordinated responses can make the difference between a brief disruption and a full-scale catastrophe.

Third, understand your cloud contracts and SLAs and negotiate better terms if possible. Speak with your providers about custom agreements if your scale can justify them. Document outages carefully and file claims promptly. More importantly, factor the actual risks—not just the “guaranteed” uptime—into your business and customer SLAs.

Cloud outages are no longer rare. As enterprises deepen their reliance on the cloud, the risks rise. The most resilient businesses will treat each outage as a crucial learning opportunity to strengthen both technical defenses and contractual agreements before the next problem occurs. As always, the best offense is a strong defense.


Rust 1.91 promotes Windows on Arm64 to Tier 1 target 30 Oct 2025, 4:24 pm

The Rust Release Team has released Rust 1.91, an update of the popular memory-safe programming language that promotes the Windows on Arm64 platform to a Tier 1 supported target.

Rust 1.91 was announced October 30. Users of previous versions can upgrade by running rustup update stable.

With Rust 1.91, the aarch64-pc-windows-msvc target is promoted to Tier 1 support, bringing the highest guarantees to users of 64-bit Arm systems running Windows, the Rust Release Team said. Tier 1 targets provide the highest support guarantees, with the project’s full test suite run on those platforms for every change merged in the compiler. Prebuilt binaries also are available.

Also in Rust 1.91, the team has added a warn-by-default lint on raw pointers to local variables being returned from functions. Although Rust’s borrow checking prevents dangling references from being returned, it does not track raw pointers.

Rust 1.91 also stabilizes 60 APIs and makes seven previously stable APIs stable in const contexts. The release also stabilizes the build.build-dir config of the Cargo package manager. This config sets the directory where intermediate build artifacts are stored, the Rust Release Team said. These artifacts are produced by Cargo and rustc during the build process.  

Rust 1.91 follows the September 18 release of Rust 1.90. That release offered native support for workspace publishing for Cargo.

The Rust language is positioned as being fast and memory-efficient, with no runtime or garbage collector, and the ability to power performance-critical services and embedded devices.


Visual Studio October update adds Claude coding models 30 Oct 2025, 3:17 pm

Microsoft has released the October 2025 update for Visual Studio 2022 (v17.14). The new release makes the Claude Sonnet 4.5 and Claude Haiku 4.5 coding models available in the GitHub Copilot chat window. Also for GitHub Copilot, the update adds Copilot memories, a feature that allows Copilot to remember project preferences, as well as support for instruction files and built-in planning for multi-step tasks.

The Visual Studio 2022 October update was announced October 30 and can be downloaded from visualstudio.microsoft.com.

With Claude Sonnet 4.5  and Claude Haiku 4.5 available in the GitHub Copilot chat window, the latest innovations for driving agentic workflows are right at the developer’s fingertips, Microsoft said. Copilot memories, meanwhile, enable Copilot to understand and apply a project’s specific coding standards, making the AI assistant project-aware and consistent across sessions. Memories use intelligent detection to understand project preferences as users prompt Copilot in the chat. Copilot then stores the preference in one of three files — .editorconfig for coding standards, CONTRIBUTING.md for best practices and guidelines, and README.md for high-level project information.

Also with the Visual Studio October update, GitHub Copilot in the IDE allows users to target specific instructions to specific folders or files in a repository by using instruction files. In addition, GitHub Copilot chat now has built-in planning to help guide multi-step tasks. When developers ask a complex question, Copilot automatically creates a markdown plan file with a task list, files to edit, and context. Copilot updates the plan in real time, tracking progress, adapting to blockers, and keeping its logic transparent, Microsoft said. 

Finally, two new commands are available for managing GitHub Copilot chat threads: /clear and /clearAll. The /clear command starts a fresh conversation when the developer runs into issues with the current chat, while /clearAll clears threads that are no longer needed.


Unit testing Spring MVC applications with JUnit 5 30 Oct 2025, 2:00 am

Spring is a reliable and popular framework for building web and enterprise Java applications. In this article, you’ll learn how to unit test each layer of a Spring MVC application, using built-in testing tools from JUnit 5 and Spring to mock each component’s dependencies. In addition to unit testing with MockMvc, Mockito, and Spring’s TestEntityManager, I’ll also briefly introduce slice testing using the @WebMvcTest and @DataJpaTest annotations, used to optimize unit tests on web controllers and databases.

Also see: How to test your Java applications with JUnit 5.

Overview of testing Spring MVC applications

Spring MVC applications are defined using three technology layers:

  • Controllers accept web requests and return web responses.
  • Services implement the application’s business logic.
  • Repositories persist data to and from your back-end SQL or NoSQL database.

When we unit test Spring MVC applications, we test each layer separately from the others. We create mock implementations, typically using Mockito, for each layer’s dependencies, then we simulate the logic we want to test. For example, a controller may call a service to retrieve a list of objects. When testing the controller, we create a mock service that either returns the list of objects, returns an empty list, or throws an exception. This test ensures the controller behaves correctly.

We’ll use Spring MVC to build and test a simple web service that manages widgets. The structure of the web service is shown here:

Diagram of a Spring MVC web service application.

Steven Haines

This is a classic MVC pattern. We have a widget controller that handles RESTful requests and delegates its business functionality to a widget service, which uses a widget repository to persist widgets to and from an in-memory H2 database.

Get the source: Download the source code for this article.

Unit testing a Spring MVC controller with MockMvc

Setting up a Spring MVC controller test is a two-step process:

  • Annotate your test class with @WebMvcTest.
  • Autowire a MockMvc instance into your controller.

We could annotate all our test classes with @SpringBootTest, but we’ll use @WebMvcTest instead. The reason is that the @WebMvcTest annotation is used for slice testing. Whereas @SpringBootTest loads your entire Spring application context, @WebMvcTest loads only your web-related resources. Furthermore, if you specify a controller class in the annotation, it will only load the specific controller you want to test. Testing a single “slice” of your application reduces both the amount of compute resources required to set up the test and the time required to run a test.

For example, when we test a controller, we’ll mock just the services it uses, and we won’t need any repositories at all. If we don’t need them, then we needn’t waste time loading them. Slice tests were created to make tests perform better and run faster.

Here’s the source code for the Widget class we’ll be managing:

package com.infoworld.widgetservice.model;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;

@Entity
public class Widget {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String name;
    private int version;

    public Widget() {
    }

    public Widget(String name) {
        this.name = name;
    }

    public Widget(String name, int version) {
        this.name = name;
        this.version = version;
    }

    public Widget(Long id, String name, int version) {
        this.id = id;
        this.name = name;
        this.version = version;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getVersion() {
        return version;
    }

    public void setVersion(int version) {
        this.version = version;
    }
}

A Widget is a JPA entity that manages three fields:

  • id is the primary key of the table, annotated with @Id and @GeneratedValue, with an automatic generation strategy.
  • name is the name of the widget.
  • version is the version of the widget resource. We’ll use this value to populate our eTag value and check it in our PUT operation’s If-Match header value. This ensures the widget being updated is not stale.

Here’s the source code for the controller we’ll be testing (WidgetController.java):

package com.infoworld.widgetservice.web;

import java.net.URI;
import java.net.URISyntaxException;
import java.util.List;
import java.util.Optional;
import com.infoworld.widgetservice.model.Widget;
import com.infoworld.widgetservice.service.WidgetService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WidgetController {
    @Autowired
    private WidgetService widgetService;
    @GetMapping("/widget/{id}")
    public ResponseEntity<?> getWidget(@PathVariable Long id) {
        return widgetService.findById(id)
                .map(widget -> {
                    try {
                        return ResponseEntity
                                .ok()
                                .location(new URI("/widget/" + id))
                                .eTag(Integer.toString(
                                               widget.getVersion()))
                                .body(widget);
                    } catch (URISyntaxException e) {
                        return ResponseEntity
                          .status(HttpStatus.INTERNAL_SERVER_ERROR)
                          .build();
                    }
                })
                .orElse(ResponseEntity.notFound().build());
    }
    @GetMapping("/widgets")
    public List<Widget> getWidgets() {
        return widgetService.findAll();
    }
    @PostMapping("/widgets")
    public ResponseEntity<?> createWidget(@RequestBody Widget widget)
    {
        Widget newWidget = widgetService.create(widget);
        try {
           return ResponseEntity
                   .created(new URI("/widget/" + newWidget.getId()))
                   .eTag(Integer.toString(newWidget.getVersion()))
                   .body(newWidget);
        } catch (URISyntaxException e) {
            return ResponseEntity
                    .status(HttpStatus.INTERNAL_SERVER_ERROR)
                    .build();
        }
    }

    @PutMapping("/widget/{id}")
    public ResponseEntity<?> updateWidget(@PathVariable Long id,
                                          @RequestBody Widget widget,
                         @RequestHeader("If-Match") Integer ifMatch) {
        Optional<Widget> existingWidget = widgetService.findById(id);
        return existingWidget.map(w -> {
            if (w.getVersion() != ifMatch) {
                return ResponseEntity.status(HttpStatus.CONFLICT)
                                     .build();
            }

            w.setName(widget.getName());
            w.setVersion(w.getVersion() + 1);

            Widget updatedWidget = widgetService.save(w);
            try {
                return ResponseEntity.ok()
                        .location(new URI("/widget/" + 
                                      updatedWidget.getId()))
                        .eTag(Integer.toString(
                                      updatedWidget.getVersion()))
                        .body(updatedWidget);
            } catch (URISyntaxException e) {
                throw new RuntimeException(e);
            }
        }).orElse(ResponseEntity.notFound().build());
    }

    @DeleteMapping("widget/{id}")
    public ResponseEntity<?> deleteWidget(@PathVariable Long id) {
        Optional<Widget> existingWidget = widgetService.findById(id);
        return existingWidget.map(w -> {
           widgetService.deleteById(w.getId());
           return ResponseEntity.ok().build();
        }).orElse(ResponseEntity.notFound().build());
    }
}

The WidgetController handles GET, POST, PUT, and DELETE operations, following standard RESTful principles, so we’re going to write tests for each operation.

The following source code shows the structure of our test class (WidgetControllerTest.java):

package com.infoworld.widgetservice.web;

@WebMvcTest(WidgetController.class)
public class WidgetControllerTest {
    @Autowired
    private MockMvc mockMvc;

    @MockitoBean
    private WidgetService widgetService;
}

I omitted the imports for readability, but the important thing to note is that the class is annotated with the @WebMvcTest annotation, and that we pass in the WidgetController.class as the controller we’re testing. This tells Spring to only load the WidgetController and no other Spring resources. The @WebMvcTest annotation includes other annotations, but the important one for our tests is @AutoConfigureMockMvc, which will cause Spring to create a MockMvc instance and add it to the application context. That lets us autowire it into our test class using the @Autowired annotation.

Next, we use the @MockitoBean annotation to use Mockito to create a mock implementation of the WidgetService, after which Spring will autowire it into the WidgetController class. This lets us control the behavior of the WidgetService for the WidgetController test cases we’re writing. Note that starting in Spring Boot version 3.4, @MockitoBean replaced @MockBean. Everything you know about @MockBean translates to using @MockitoBean—with some improvements.

Unit testing GET /widgets

Let’s start with the easiest test case, a test for GET /widgets:

@Test
void testGetWidgets() throws Exception {
    List<Widget> widgets = new ArrayList<>();
    widgets.add(new Widget(1L, "Widget 1", 1));
    widgets.add(new Widget(2L, "Widget 2", 1));
    widgets.add(new Widget(3L, "Widget 3", 1));

    when(widgetService.findAll()).thenReturn(widgets);

    mockMvc.perform(get("/widgets"))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.length()").value(3))
            .andExpect(jsonPath("$[0].id").value(1L))
            .andExpect(jsonPath("$[0].name").value("Widget 1"))
            .andExpect(jsonPath("$[0].version").value(1));
};

The testGetWidgets() method creates a list of three widgets and then configures the mock WidgetService to return the list when its findAll() method is called. The WidgetControllerTest class statically imports the org.mockito.Mockito.when() method that accepts a method call, which in this case is widgetService.findAll(), and returns a Mockito OngoingStubbing instance. This OngoingStubbing instance exposes methods like thenReturn(), thenThrow(), thenCallRealMethod(), thenAnswer(), and then().

Here, we use the thenReturn() method to tell Mockito to return the list of widgets when the WidgetService’s findAll() method is called. The @MockitoBean annotation causes the mock WidgetService to be autowired into the WidgetController. So, when the getWidgets() method is called in response to a GET /widgets, it calls the WidgetService’s findAll() method and returns our list of widgets as a web response.

Next, we use MockMvc’s perform() method to execute a web request. This diagram shows the various classes that interact with the perform() method:

Diagram of classes that interact with the MockMvc perform() method.

Steven Haines

The perform() method accepts a RequestBuilder. Spring defines several built-in RequestBuilders that we can statically import into our tests, including get(), post(), put(), and delete(). The perform() method returns a ResultActions instance that exposes methods such as andExpect(), andExpectAll(), andDo(), and andReturn(). Here, we invoke the andExpect() method, which accepts a ResultMatcher.

A ResultMatcher defines a match() method that throws an AssertionError if the assertion fails. Spring defines several ResultMatchers that we can statically import:

  • status() allows us to check the HTTP status code of the response.
  • content() allows us to check the content headers of the response, such as Content-Type.
  • header() allows us to check any of the HTTP header values.
  • jsonPath() allows us to inspect the contents of a JSON document.

After MockMvc performs a GET to /widgets, we expect the HTTP status code to be 200 OK. We can then use the jsonPath matcher to check the body results, using the following JSON path expressions:

  • $.length(): The $ references the root of the JSON document. If the response is a list, then we can call the length() method to get the number of elements in the list.
  • $[0].id: JSON path expressions for a list use an array syntax starting at 0. This expression gets the ID of the first element in the list.
  • $[0].name: This expression gets the name of the first element and compares it to “Widget 1”.
  • $[0].version: This expression gets the version of the first element and compares it to 1.

Unit testing the GET /widget/{id} handler

Here’s the source code to test the GET /widget/{id} handler:

@Test
void testGetWidgetById() throws Exception {
    Widget widget = new Widget(1L, "My Widget", 1);          
    when(widgetService.findById(1L))
           .thenReturn(Optional.of(widget));

    mockMvc.perform(get("/widget/{id}", 1))
            // Validate that we get a 200 OK Response Code
            .andExpect(status().isOk())

            // Validate Headers
            .andExpect(content()
                      .contentType(MediaType.APPLICATION_JSON))
            .andExpect(header().string(HttpHeaders.LOCATION,
                                       "/widget/1"))
            .andExpect(header().string(HttpHeaders.ETAG, "\"1\""))

            // Validate content
            .andExpect(jsonPath("$.id").value(1L))
            .andExpect(jsonPath("$.name").value("My Widget"))
            .andExpect(jsonPath("$.version").value(1));
 }

This test method is very similar to the testGetWidgets() method, but with some notable changes:

  • The GET URI is defined using a URI template. You can specify any number of variables enclosed in braces in the URI template and then send a list of arguments that will replace those variables in the order they appear in the template.
  • We check that the returned Content-Type is “application/json”, which is a constant in the MediaType class. We access the content using the content() method, which returns a ContentResultMatchers instance that provides various methods, including contentType(), which allows us to validate the content headers.
  • We check for specific header values using the header() method. The header() method returns a HeaderResultMatchers instance, which can check header String, long, and date values, as well as whether specific headers exist. In this case, we use constants defined in the HttpHeaders class to check the location and eTag header values.
  • We check the body of the response using JSON path expressions. In this case, we do not have a list of objects, so we can access the individual fields in the JSON document directly. For example, $.id retrieves the id field value in the root of the document.

Unit testing a GET /widget/{id} Not Found code

Next, we test the GET /widget/{id}, passing it an invalid ID so that it returns a 404 Not Found response code:

@Test
void testGetWidgetByIdNotFound() throws Exception {
   when(widgetService.findById(1L)).thenReturn(Optional.empty());

   mockMvc.perform(get("/widget/{id}", 1))
            // Validate that we get a 404 Not Found Response Code
            .andExpect(status().isNotFound());
}

The testGetWidgetByIdNotFound() method configures the mock WidgetService to return Optional.empty() when its findById() is called with a value of 1. We then perform a GET request to /widget/1, then assert that the returned HTTP status code is 404 Not Found.

Unit testing POST /widgets

Here’s how to test a Widget creation:

@Test
void testCreateWidget() throws Exception {
    Widget widget = new Widget(1L, "Widget 1", 1);
    when(widgetService.create(any())).thenReturn(widget);

    mockMvc.perform(post("/widgets")
            .contentType(MediaType.APPLICATION_JSON)
            .content("{\"name\": \"Widget 1\"}"))

            // Validate that we get a 201 Created Response Code
            .andExpect(status().isCreated())

            // Validate Headers
            .andExpect(content().contentType(
                                      MediaType.APPLICATION_JSON))
            .andExpect(header().string(HttpHeaders.LOCATION, 
                                       "/widget/1"))
            .andExpect(header().string(HttpHeaders.ETAG, "\"1\""))

            // Validate content
            .andExpect(jsonPath("$.id").value(1L))
            .andExpect(jsonPath("$.name").value("Widget 1"))
            .andExpect(jsonPath("$.version").value(1));

The testCreateWidget() method first creates a Widget to return when the WidgetService’s create() method is called with any argument. The any() matcher matches any argument and, because the createWidget() handler will create a new Widget instance, we will not have access to that instance when the test runs. We then invoke MockMvc’s perform() method against the “/widgets” URI, sending the content body of a new widget named “Widget 1” using the content() method. We expect a 201 Created HTTP response code, an “application/json” content type, a location header of “/widget/1”, and an eTag value of the String “1”. The body of the response should match the Widget we returned from the create() method, namely an ID of 1, a name of “Widget 1”, and a version of 1.

Unit testing PUT /widget

This code runs three tests for the PUT operation:

@Test
public void testSuccessfulUpdate() throws Exception {
    // Create a mock Widget when the WidgetService's findById(1L) 
    // is called
    Widget mockWidget = new Widget(1L, "Widget 1", 5);
    when(widgetService.findById(1L))
                      .thenReturn(Optional.of(mockWidget));

    // Create a mock Widget that is returned when the
    // WidgetController saves the Widget to the database
    Widget savedWidget = new Widget(1L, "Updated Widget 1", 6);
    when(widgetService.save(any())).thenReturn(savedWidget);

    // Execute a PUT /widget/1 with a matching version: 5
    mockMvc.perform(put("/widget/{id}", 1L)
                    .contentType(MediaType.APPLICATION_JSON)
                    .header(HttpHeaders.IF_MATCH, 5)
                    .content("{\"id\": 1, " +
                             "\"name\": \"Updated Widget 1\"}"))

            // Validate that we get a 200 OK HTTP Response
           .andExpect(status().isOk())

            // Validate the headers
           .andExpect(content()
                        .contentType(MediaType.APPLICATION_JSON))
           .andExpect(header().string(HttpHeaders.LOCATION, 
                                      "/widget/1"))
           .andExpect(header().string(HttpHeaders.ETAG, "\"6\""))

           // Validate the contents of the response
           .andExpect(jsonPath("$.id").value(1L))
           .andExpect(jsonPath("$.name")
                               .value("Updated Widget 1"))
           .andExpect(jsonPath("$.version").value(6));
}

@Test
public void testUpdateConflict() throws Exception {
   // Create a mock Widget with a version set to 5
   Widget mockWidget = new Widget(1L, "Widget 1", 5);

    // Return the mock Widget when the WidgetService's
    // findById(1L) is called
    when(widgetService.findById(1L))
                      .thenReturn(Optional.of(mockWidget));

    // Execute a PUT /widget/1 with a mismatched version number: 2
    mockMvc.perform(put("/widget/{id}", 1L)
                    .contentType(MediaType.APPLICATION_JSON)
                    .header(HttpHeaders.IF_MATCH, 2)
                    .content("{\"id\": 1, " + 
                             "\"name\":  \"Updated Widget 1\"}"))
             // Validate that we get a 409 Conflict HTTP Response
            .andExpect(status().isConflict());
}

@Test
public void testUpdateNotFound() throws Exception {
   // Return the mock Widget when the WidgetService's
   // findById(1L) is called
   when(widgetService.findById(1L)).thenReturn(Optional.empty());

   // Execute a PUT /widget/1 for a widget that does not exist
   mockMvc.perform(put("/widget/{id}", 1L)
                    .contentType(MediaType.APPLICATION_JSON)
                    .header(HttpHeaders.IF_MATCH, 2)
                    .content("{\"id\": 1, " + 
                             "\"name\":  \"Updated Coffee 1\"}"))

           // Validate that we get 404 Not Found
           .andExpect(status().isNotFound());
}

We have three variations:

  • A successful update.
  • A failed update because of a version conflict.
  • A failed update because the widget was not found.

In RESTful web services, version management is handled by the entity tag, or eTag. When you retrieve an entity, it has an eTag value. When you want to update the entity, you pass that eTag value in the If-Match HTTP header. If the If-Match header does not match the current eTag, which is the Widget version in our implementation, then the PUT handler returns a 409 Conflict HTTP response code. If you get this error, it means that you need to retrieve the entity again and retry your operation. This way, if two different clients attempt to update the same entity simultaneously, only one will succeed.
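To make the three outcomes concrete, here is a sketch of how a PUT handler could apply the If-Match check, continuing the hypothetical controller sketched earlier. It assumes the usual Spring Web imports (@PutMapping, @PathVariable, @RequestHeader, HttpHeaders, HttpStatus) plus java.util.Optional, and it is illustrative only, not the article’s actual handler:

@PutMapping("/widget/{id}")
public ResponseEntity<Widget> updateWidget(
        @PathVariable Long id,
        @RequestBody Widget widget,
        @RequestHeader(HttpHeaders.IF_MATCH) Integer ifMatch) {

    Optional<Widget> existing = widgetService.findById(id);

    // 404 Not Found: there is nothing to update
    if (existing.isEmpty()) {
        return ResponseEntity.notFound().build();
    }

    // 409 Conflict: the If-Match header does not match
    // the widget's current version
    if (existing.get().getVersion() != ifMatch.intValue()) {
        return ResponseEntity.status(HttpStatus.CONFLICT).build();
    }

    // Bump the version, save, and echo the new state back
    Widget updated = widgetService.save(
            new Widget(id, widget.getName(),
                       existing.get().getVersion() + 1));

    return ResponseEntity.ok()
            .location(URI.create("/widget/" + updated.getId()))
            .eTag("\"" + updated.getVersion() + "\"")
            .body(updated);
}

Note that the sketch returns the eTag with surrounding quotes, which is what the test’s ETag header assertion expects.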

In the testSuccessfulUpdate() method, we return a Widget with a version of 5 when the WidgetService’s findById() method is called. We then pass an If-Match header value of 5 and validate that we get a 200 OK HTTP response code and the expected header and body values. In the testUpdateConflict() method, we do the same thing, but we set the If-Match header to 2, which does not match 5, so we validate that we get a 409 Conflict HTTP response code. And finally, in the testUpdateNotFound() method, we configure the WidgetService to return Optional.empty() when its findById() method is called, execute the PUT operation, and validate that we get a 404 Not Found HTTP response code.

Unit testing DELETE /widget

Finally, here is the source code for our two DELETE /widget tests:

@Test
void testDeleteSuccess() throws Exception {
    // Set up the mocked Widget
    Widget mockWidget = new Widget(1L, "Widget 1", 5);

    // Setup the mocked service
    when(widgetService.findById(1L))
                      .thenReturn(Optional.of(mockWidget));
    doNothing().when(widgetService).deleteById(1L);

    // Execute our DELETE request
    mockMvc.perform(delete("/widget/{id}", 1L))
            .andExpect(status().isOk());
}

@Test
void testDeleteNotFound() throws Exception {
    // Setup the mocked service
    when(widgetService.findById(1L)).thenReturn(Optional.empty());

    // Execute our DELETE request
    mockMvc.perform(delete("/widget/{id}", 1L))
            .andExpect(status().isNotFound());
}

The DELETE handler first tries to find the widget by ID and then calls the WidgetService’s deleteById() method. The testDeleteSuccess() method configures the WidgetService to return a mock Widget when the findById() method is called and then configures it to do nothing when the deleteById() method is called. Because the deleteById() method returns void, we do not need to mock a response, but we do want to allow the method to be called. We execute the DELETE operation and validate that we receive a 200 OK HTTP response code. The testDeleteNotFound() method configures the WidgetService to return Optional.empty() when its findById() method is called. We execute the DELETE operation and validate that we receive a 404 Not Found HTTP response code.
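A handler matching that description could look like the following sketch, again continuing the hypothetical controller above (with the corresponding @DeleteMapping import); it is illustrative only:

@DeleteMapping("/widget/{id}")
public ResponseEntity<Void> deleteWidget(@PathVariable Long id) {
    // 404 Not Found if the widget does not exist
    if (widgetService.findById(id).isEmpty()) {
        return ResponseEntity.notFound().build();
    }

    // Otherwise delete it and return 200 OK
    widgetService.deleteById(id);
    return ResponseEntity.ok().build();
}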

At this point, we have a comprehensive set of tests for all of our controller operations. Let’s continue down our stack and test our service.

Unit testing a Spring MVC service

Next, we’ll test a WidgetService class, shown here:

package com.infoworld.widgetservice.service;

import java.util.List;
import java.util.Optional;

import com.infoworld.widgetservice.model.Widget;
import com.infoworld.widgetservice.repository.WidgetRepository;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class WidgetService {
    @Autowired
    private WidgetRepository widgetRepository;

    public List<Widget> findAll() {
        return widgetRepository.findAll();
    }

    public Optional<Widget> findById(Long id) {
        return widgetRepository.findById(id);
    }

    public Widget create(Widget widget) {
        widget.setVersion(1);
        return widgetRepository.save(widget);
    }

    public Widget save(Widget widget) {
        return widgetRepository.save(widget);
    }

    public void deleteById(Long id) {
        widgetRepository.deleteById(id);
    }
}

The WidgetService is very simple. It autowires in a WidgetRepository and then delegates almost all its functionality to the WidgetRepository. The only business logic it implements is that it sets the Widget version to 1 in the create() method, when it is persisting a new Widget to the database.

While Spring supports slice testing for our controller and (as you’ll soon see) our repository, it doesn’t have a slice testing annotation for our service. We could use the @SpringBootTest annotation, but then Spring would load all the controllers, repositories, and any other Spring resources in our application into the Spring application context. We can avoid this overhead by using Mockito directly.

Here is the source code for the WidgetServiceTest class:

package com.infoworld.widgetservice.service;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.when;

import java.util.Optional;

import com.infoworld.widgetservice.model.Widget;
import com.infoworld.widgetservice.repository.WidgetRepository;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
public class WidgetServiceTest {
    @Mock
    private WidgetRepository repository;

    @InjectMocks
    private WidgetService service;

    @Test
    void testFindById() {
        Widget widget = new Widget(1L, "My Widget", 1);
        when(repository.findById(1L)).thenReturn(Optional.of(widget));

        Optional<Widget> w = service.findById(1L);
        assertTrue(w.isPresent());
        assertEquals(1L, w.get().getId());
        assertEquals("My Widget", w.get().getName());
        assertEquals(1, w.get().getVersion());
    }
}

JUnit 5 supports extensions, and Mockito provides a test extension that we can enable through the @ExtendWith annotation. This extension allows Mockito to read our class, find objects to mock, and inject mocks into other classes. The WidgetServiceTest tells Mockito to create a mock WidgetRepository, by annotating it with the @Mock annotation, and then to inject that mock into the WidgetService, using the @InjectMocks annotation. The result is a WidgetService that we can test, backed by a mock WidgetRepository that we can configure for our test cases.

Also see: Advanced unit testing with JUnit 5, Mockito, and Hamcrest.

This is not a comprehensive test, but it should get you started. It has a single method, testFindById(), that demonstrates how to test a service method. It creates a mock Widget instance and then uses the Mockito when() method, just as we used in the controller test, to configure the WidgetRepository to return an Optional of that Widget when its findById() method is called. Then it invokes the WidgetService’s findById() method and validates that the mock Widget is returned.
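If you wanted to exercise the one piece of real business logic in the service, the version stamping in create(), you could add test methods along the following lines to the WidgetServiceTest class. This is only a sketch; it assumes two additional static imports, org.mockito.ArgumentMatchers.any and org.mockito.Mockito.verify:

@Test
void testCreateSetsVersionToOne() {
    // Echo back whatever Widget the service asks the
    // repository to save
    when(repository.save(any(Widget.class)))
            .thenAnswer(invocation -> invocation.getArgument(0));

    Widget created = service.create(new Widget("New Widget", 0));

    // create() should always stamp new widgets with version 1
    assertEquals(1, created.getVersion());
    assertEquals("New Widget", created.getName());
}

@Test
void testDeleteByIdDelegatesToRepository() {
    service.deleteById(1L);

    // The service should simply delegate the delete
    // to the repository
    verify(repository).deleteById(1L);
}

The thenAnswer() stub echoes back the Widget passed to save(), which lets the test observe the version that create() assigned before saving.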

Slice testing a Spring Data JPA repository

Next, we’ll slice test our JPA repository (WidgetRepository.java), shown here:

package com.infoworld.widgetservice.repository;

import java.util.List;
import com.infoworld.widgetservice.model.Widget;
import org.springframework.data.jpa.repository.JpaRepository;

public interface WidgetRepository extends JpaRepository<Widget, Long> {
    List<Widget> findByName(String name);
}

The WidgetRepository is a Spring Data JPA repository, which means that we define the interface and Spring generates the implementation. It extends the JpaRepository interface, which takes two type arguments:

  • The type of entity that it persists, namely a Widget.
  • The type of primary key, which in this case is a Long.

It generates common CRUD method implementations for us to create, update, delete, and find widgets, and then we can define our own query methods using a specific naming convention. For example, we define a findByName() method that returns a List of Widgets. Because “name” is a field in our Widget entity, Spring will generate a query that finds all widgets with the specified name.
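To illustrate the naming convention further, here are two more derived-query signatures Spring Data could generate for this entity. These methods are hypothetical examples, not part of the article’s WidgetRepository:

// Hypothetical additions to the WidgetRepository shown above:

// Widgets whose name contains the given fragment, ignoring case
List<Widget> findByNameContainingIgnoreCase(String fragment);

// Widgets with the given name at a specific version
List<Widget> findByNameAndVersion(String name, Integer version);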

Here is our WidgetRepositoryTest class:

package com.infoworld.widgetservice.repository;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertNull;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import com.infoworld.widgetservice.model.Widget;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;

@DataJpaTest
public class WidgetRepositoryTest {
    @Autowired
    private TestEntityManager entityManager;

    @Autowired
    private WidgetRepository widgetRepository;

    private final List<Long> widgetIds = new ArrayList<>();
    private final List<Widget> testWidgets = Arrays.asList(
            new Widget("Widget 1", 1),
            new Widget("Widget 2", 1),
            new Widget("Widget 3", 1)
    );

    @BeforeEach
    void setup() {
        testWidgets.forEach(widget -> {
            entityManager.persist(widget);
            widgetIds.add((Long)entityManager.getId(widget));
        });
        entityManager.flush();
    }

    @AfterEach
    void teardown() {
        widgetIds.forEach(id -> {
            Widget widget = entityManager.find(Widget.class, id);
            if (widget != null) {
                entityManager.remove(widget);
            }
        });
        widgetIds.clear();
    }

    @Test
    void testFindAll() {
        List<Widget> widgetList = widgetRepository.findAll();
        assertEquals(3, widgetList.size());
    }

    @Test
    void testFindById() {
        Widget widget = widgetRepository.findById(
                               widgetIds.getFirst()).orElse(null);

        assertNotNull(widget);
        assertEquals(widgetIds.getFirst(), widget.getId());
        assertEquals("Widget 1", widget.getName());
        assertEquals(1, widget.getVersion());
    }

    @Test
    void testFindByIdNotFound() {
        Widget widget = widgetRepository.findById(
            widgetIds.getFirst() + testWidgets.size()).orElse(null);
        assertNull(widget);
    }

    @Test
    void testCreateWidget() {
        Widget widget = new Widget("New Widget", 1);
        Widget insertedWidget = widgetRepository.save(widget);

        assertNotNull(insertedWidget);
        assertEquals("New Widget", insertedWidget.getName());
        assertEquals(1, insertedWidget.getVersion());
        widgetIds.add(insertedWidget.getId());
    }

    @Test
    void testFindByName() {
        List<Widget> found = widgetRepository.findByName("Widget 2");
        assertEquals(1, found.size(), "Expected to find 1 Widget");

        Widget widget = found.getFirst();
        assertEquals("Widget 2", widget.getName());
        assertEquals(1, widget.getVersion());
    }
}

The WidgetRepositoryTest class is annotated with the @DataJpaTest annotation, which is a slice-testing annotation that loads repositories and entities into the Spring application context and creates a TestEntityManager that we can autowire into our test class. The TestEntityManager allows us to perform database operations outside of our repository so that we can set up and tear down our test scenarios.

In the WidgetRepositoryTest class, we autowire in both our WidgetRepository and TestEntityManager. We define a setup() method annotated with JUnit’s @BeforeEach annotation, so it executes before each test case runs, and a teardown() method annotated with JUnit’s @AfterEach annotation, so it executes after each test completes.

The class defines a testWidgets list containing three test widgets, and the setup() method inserts them into the database using the TestEntityManager’s persist() method. After it inserts each widget, it saves the automatically generated ID so that we can reference it in our tests. Finally, after persisting the widgets, it flushes them to the database by calling the TestEntityManager’s flush() method.

The teardown() method iterates over all of the widget IDs, finds each Widget using the TestEntityManager’s find() method, and, if it is found, removes it from the database. Finally, it clears the widget ID list so that the setup() method can rebuild it for the next test. (Note that the TestEntityManager removes entities directly; it does not have a remove-by-ID method, so we first have to find each Widget and then remove it.)

Even though most of the methods being tested are autogenerated and well tested, I wanted to demonstrate how to write several kinds of tests. The only method that we really need to test is the findByName() method, because that is the only custom method we define. For example, if we accidentally defined the method as findByNam() instead of findByName(), Spring would have no matching Widget property from which to derive the query and the method would not work, so it is definitely worth testing.

Conclusion

Spring provides robust support for testing each layer of a Spring MVC application. In this article, we reviewed how to test controllers, using MockMvc; services, using the JUnit Mockito extension; and repositories, using the Spring TestEntityManager. We also reviewed slice testing as a strategy to reduce testing resource utilization and minimize the time required to execute tests. Slice testing is implemented in Spring using the @WebMvcTest and @DataJpaTest annotations. I hope these examples have given you everything you need to feel comfortable writing robust tests for your Spring MVC applications.


Run Azure DevOps on premises 30 Oct 2025, 2:00 am

It shouldn’t be a surprise that there’s a lot of Azure you can run on premises, thanks to platforms like Azure Stack. But there’s more to on-premises Azure than the familiar platforms and services. There’s also a whole developer infrastructure that plugs into our day-to-day development environments, integrating with Visual Studio to give you the same continuous integration/continuous delivery (CI/CD) environment as cloud platforms like GitHub.

Azure DevOps Server is the replacement for Team Foundation Server, rebranding the on-premises tool and adding on-premises versions of many of the features in the cloud-hosted Azure DevOps Services. If you’re running TFS 2015 or later, you can upgrade, which may be a good option as TFS 2015 is no longer supported and will not get security updates.

A release candidate is now available. If you’ve used earlier versions, there’s one obvious change to the branding: The name no longer includes the year. This aligns it with the cloud Azure DevOps continuous delivery model, dropping the fixed life cycle and adopting what Microsoft calls its “modern life-cycle policy.” This requires you to stay up to date to get support, and there will be no more major named releases.

Requirements for Azure DevOps Server

If you need to keep your code on premises, say, because you’re working in a regulated industry or have privacy and security concerns, Azure DevOps Server provides the same core services as the cloud-hosted Azure DevOps Services, but instead of Microsoft’s infrastructure, you provide your own servers and storage.

There are two options for on-premises Azure DevOps Server instances, which support two quite different types of organization: a single-server install and a multiple-server cluster. The first is best for small projects or independent developers; the second is for larger teams or where you want a single, reliable repository for an organization. Microsoft recommends at least 8 cores, 16GB of RAM, and SSD storage for a single-server deployment. Adding Elasticsearch requires a second CPU and an additional 8GB of RAM. The more RAM, the larger the organization you can support. A 16GB system will work for up to 250 users, 24GB for 500.

Behind Azure DevOps Server is a SQL Server database. You can start with the Express edition for independent and small-team operations, scaling up to the Standard or Enterprise editions for larger teams. The minimum supported version for the latest release is SQL Server 2019.

Multiple-server deployments start by splitting storage and the Azure DevOps Server into separate servers, with the possibility of using clustered storage to improve reliability. Other servers can be added to support code search and to run builds. The latest releases require a minimum operating system of Windows Server 2022. If you’re setting up a multiple-server Azure DevOps Server environment, be sure to have your database in place with access for the server.

Working with Azure DevOps Server

Installation is wizard-driven, much like any current Windows Server application, with a Configuration Center application that walks you through the process of setting up the tool and necessary dependencies. Each step has tests that make sure services are correctly configured and ready.

The Configuration Center also helps you manage configurations before installation is complete; once everything is in place and running, you can use the built-in Administration Console to handle further setup. You have the choice of a Basic, Advanced, or Azure deployment. The Azure option is for running your own Azure DevOps Server in a virtual machine, connecting to Azure SQL storage.

Once installed, you can start to define the users, groups, and roles in your Azure DevOps organization. If you use Active Directory, it’s a good idea to create new Azure DevOps-specific groups and service accounts and then add users. Keeping development groups separate from other organizational functions gives you more control and avoids the complexity of having more than one service needing the same role-based access controls.

Microsoft recommends having three groups outside of server administrators: one for most users, with access to all projects; another for project managers and architects who can manage projects; and one with restricted permissions that can lock access, for contractors and staff who only need access to limited group projects.

Projects and pipelines

Projects are at the heart of Azure DevOps Server’s workflow. They’re where project administrators detail the scope of the project, and they give you a hub for collaboration around code, much like GitHub or similar social coding platforms. While the look-and-feel is very much Azure DevOps’ own, owing much of its design language to the familiar Azure Portal, the core collaboration methodology is very much like GitHub’s, with a set of Kanban-like boards to manage work tasks plus a code repository. The server also runs CI/CD pipelines, calling a locally hosted instance of the Azure Pipelines runner.

Once triggered, a pipeline behaves as a staged series of jobs. You can have multiple stages in a pipeline, for example, building code, then running tests, and finally deploying it. Each stage has multiple jobs, which are handed over to external applications, for example, using Microsoft’s build tools to compile Windows code. The pipeline collates the job results, checking to see if they have succeeded or failed before either aborting the run or starting the next stage. Jobs can be run in sequence or in parallel, so if you’re building a cross-platform application, it can run Windows, macOS, and Linux builds in parallel.

Building YAML pipelines

Pipelines are defined using YAML and can be managed in the same repository as the code they build. The new YAML pipeline editor replaces the original visual designer. Microsoft hasn’t yet deprecated the Classic Pipelines tool, but the writing is clearly on the wall, and the newer YAML editor is now recommended. Tools are provided to migrate existing Classic build pipelines to YAML; there’s no support for migrating release pipelines, so you’ll have to do this manually.

Azure DevOps Server provides its own browser-based editor to help build YAML pipelines, based on the same engine as Visual Studio Code. This has the necessary IntelliSense for code completion, as well as a task assistant that adds building blocks and code for specific tasks, such as calling out to the .NET CLI. You can quickly add commands in the task assistant edit box, and when ready, they are formatted in YAML and added to your pipeline.

The editor also includes tools to validate your YAML before it’s deployed. You can also download the YAML and edit it locally. Variables can manage secrets and other common data, using Azure DevOps rather than a public repository. The switch to YAML allows you to use templates to share common actions between pipelines, with an editor that lets you edit existing templates. If you need a new one, it needs to be created outside the built-in editor.

Pipelines can be cloned, so that one that works for a development build can be copied and used as the basis for a production build pipeline or another similar project. This process only brings across the core YAML code. For security reasons, variables and secrets aren’t transferred and need to be recreated for the new pipeline.

In-cloud and on-premises

By offering much the same feature set as its cloud alternative, Azure DevOps Server lets you bring cloud-hosted CI/CD into your data center with minimal disruption, and Microsoft’s new update model should keep the two variants close to in sync. Although running your own instance does mean more work than simply enabling a new Azure service, there’s still enough automation to allow project managers and development leads to manage and run repositories and pipelines without requiring administrative intervention.

The capability to have both source control and CI/CD pipelines on premises is important as it ensures you’re not reliant on outside resources for your application builds. This approach is essential for regulated industries; there are good reasons to keep as much of the build process inside your firewall as possible, as your software is as much the heart of your business as your staff.


Key principles of a successful internal developer platform 30 Oct 2025, 2:00 am

Modern software delivery is a story of increasing complexity. As organizations adopt cloud-native technologies, the number of tools, APIs, and processes keeps multiplying. Developers spend more time wrestling with infrastructure than building features, while platform engineering teams struggle to keep systems secure, compliant, and cost‑efficient.

That’s why more and more enterprises are turning to internal developer platforms (IDPs). But what makes a good IDP? Let’s break it down.

Why do you need an IDP?

Think of an unplanned city. Every resident builds their own house, lays their own water pipes, runs their own electricity, and decides their own traffic rules. The result? Chaos: tangled wires, pothole‑ridden streets, constant outages, and no way to keep things safe or efficient. You end up with factories within residential areas, schools in the midst of factories, a jumble of mismatched structures and systems. And the chaos continues.

That’s what software delivery looks like without an IDP:

  • Fragile, snowflake environments.
  • Inconsistent developer experiences.
  • Slow onboarding times.
  • Gaps in security and compliance.

Now compare that with a planned city. Roads, plumbing, electricity, and zoning are built once, shared by everyone, and maintained by city planners. Residents build houses, schools, or shops on top of that orderly foundation, within demarcated neighborhoods.

That’s what an IDP provides:

  • Faster delivery through golden paths.
  • Secure and compliant environments.
  • Governed, monitored processes.
  • Happy developers.

An IDP is the paved road that lets teams focus on what they’re building, not on how it runs.

What should an IDP deliver, and to whom?

An IDP has two primary audiences: developers and platform engineers. Think of developers as the city’s citizens and builders, and platform engineers as the city planners and utility operators.

Developers:

  • Build applications (homes, shops, schools).
  • Rely on infrastructure (roads, power, water, emergency services).
  • Shouldn’t need to worry about how utilities work under the hood.

Platform engineers:

  • Provide paved roads, power lines, zoning codes, and shared services.
  • Don’t build every building but ensure safety, connectivity, and scalability.
  • Use automation and blueprints to replicate working neighborhoods.

The magic of an IDP lies in getting the abstractions right. Developers need only the abstractions relevant to their work (e.g., database type), while platform teams need deeper visibility (e.g., how workloads run across VMs, containers, or serverless).

Think of the abstractions needed for running a well-planned city. A city is made up of:

  • Neighborhoods or districts (residential, urban, financial)
  • Units (houses, schools, shops)
  • Utilities (electricity, water, roads)
  • Connectivity (traffic lights, speed limits, lane directions)

A digital enterprise has similar characteristics. This is where the following four layers of enterprise abstraction emerge. A digital enterprise is made up of:

  • Business domains such as marketing, sales, and customer service. Like a city’s neighborhoods, each domain has its own style, characteristics, and purpose.
  • Business functionalities such as microservices, web applications, and mobile applications within each domain. These are similar to shops, houses, and schools in neighborhoods.
  • Deployment services such as CI/CD, observability, runtime infrastructure, and scaling. Just as a city cannot function without electricity, water, and other utilities, an enterprise needs deployment services to be operational.
  • Middleware such as API gateways, service meshes, ingress, and firewalls. In a city, moving goods and services requires roads, intersections, speed limits, lane directions, and so on. In an enterprise, middleware services provide this connectivity.

By carefully defining these layers, an IDP ensures consistency, safety, and clarity across the enterprise.

10 key principles of a good IDP

So how do you build an IDP that truly delivers? Here are 10 guiding principles.

  1. Separation of concerns
  2. Enterprise portal
  3. Domain-driven design
  4. API-first mindset
  5. Security-first mindset
  6. Universal interface
  7. Self-service
  8. Ops-driven, declarative and automated
  9. Intelligent and insightful
  10. Product orientation

Separation of concerns

There are two types of separation of concerns that come to mind with respect to internal developer platforms, the separation between developers and operators and the separation between the control plane and the data plane.

Developers vs. operators

An IDP should serve both application developers and platform operators. Application developers focus on APIs, applications, databases, etc. Operators worry about infrastructure, scaling, costs, networking, etc. These two personas therefore are very different. Their interests are different, and so should be the abstractions for these roles.

Developers are focused on implementing code for their applications. They write code on their local workstations (laptops), test it out, and commit to the repositories when done. CI/CD kicks in at that point and deploys their changes to a development environment. Developers wouldn’t really care whether this code runs in a VM or a container in these environments, but the operators certainly do. How should we think about abstractions in this case? Since developers wouldn’t typically care about this detail, it should be abstracted out (hidden away). But since this detail is critical for operators, it should not be abstracted away from them; it should be exposed.

Another example is the implementation language and technology stack of the application. Whereas developers need to understand the language and technology stack of their applications, operators may not. After all, these are just workloads of different types. A microservice written in Java vs. Go is the same from an operator’s point of view; it’s a workload with environment variables. This detail should therefore be abstracted away from operators but exposed to developers.

It becomes clear then that abstractions for developers and operators on an IDP aren’t the same. It is crucial to understand this difference and build your IDP such that the concerns of developers and operators are separated.

Control plane vs. data plane

Another important separation in an IDP is between controls and runtimes. The control plane is similar to a signaling tower: it issues instructions and commands but doesn’t run workloads. Workloads run on the data plane. The permissions, scaling, and other operational characteristics of a control plane are therefore very different from those of a data plane. Understanding this difference is critical to having a smoothly operating IDP.

Another point to consider is that application workloads may run on heterogeneous infrastructures in various forms. You might run your development environment in a low-cost data center while QA, staging, and production run in a public cloud. This would mean that the same set of workloads must be operated on different infrastructures. Having one control interface that works with any data plane infrastructure is therefore of paramount importance.

Enterprise portal

An enterprise portal is the front door to an internal developer platform. Developers in organizations end up creating many different digital artifacts. These could be APIs, applications, jobs, databases, caches, queues, and so on. The value of these artifacts is only realized if they are easily discoverable and reused. A lack of discoverability leads to duplication of workloads, zombie assets, wasted spend, and potential security risks.

In addition to discoverability, a portal should provide easy consumption of these artifacts as well. Having self-service access to an artifact’s documentation and convenience mechanisms to connect and consume artifacts are important considerations when building an enterprise portal to an IDP.

An enterprise portal should also provide guardrails for creating artifacts. For example, if a developer wants to create a microservice in Spring Boot, the portal should offer templates that comply with the organization’s rules and regulations and let the developer do so. Templates should declare allowed versions, folder structures, style guides, and so on. Lack of adherence to the given standards should result in warnings and error reports.

Domain-driven design

Think of an unplanned city where builders are allowed to build anything they want wherever they want to. Not only would you end up with a lot of inefficiencies and inconvenience, but the city would be ugly as well. There would be no consistency, no design, and therefore no beauty in it.

A planned city has neighborhoods. Each neighborhood has its own personality and rules of operation. For example, the speed limits in a school zone would be different from those in an urban area. Zoning rules of an urban area would be very different from those of residential areas. Builders who build infrastructure need to adhere to these rules so that you have a well-functioning city.

A digital enterprise is no different. It needs to have its domains, each with its own personalities and operational rules. A customer experience domain would be very different from an operations domain. One would need access to CRMs and other front-office tools, whereas the other would need access to ERPs and other internal systems. The levels of access needed, types of data accessed, etc. would therefore be very different in these respective domains.

An IDP should help align software structure with business processes. It should allow platform engineers to define, tune, and maintain the rules of engagement based on the organizational domains. It should naturally guide developers to create digital artifacts in the right places. Failure to meet these needs will result in inefficiencies, confusion, extra effort, delays, and security risks.

Unfortunately, many IDPs today only focus on reducing cognitive load on developers and increasing delivery velocity. One has to realize that going faster only makes sense if you are headed in the right direction. A well-designed IDP keeps developers on the right track and streamlines the path to the goal.

API-first mindset

Just as the buildings in a city are most useful when connected via roads, digital artifacts in a development organization are most useful when they can be discovered and consumed easily. This means that artifacts must maintain a contractual obligation to their consumers. Every digital artifact therefore needs an API-first mindset behind it.

An API-first mindset means:

  • Treating everything as an API. Everything you create in an organization, such as services, databases, caches, queues, applications, etc., should be treated as an API.
  • Sharing and reuse. Everything should be properly discoverable, properly governed, and accessed using standards. No hidden artifacts or private backdoors.
  • Life-cycle management. All artifacts should be properly versioned and life-cycle managed. Changes should be made on new versions only, adhering to contractual obligations. Older versions should be deprecated and eventually retired.
  • Quality enablers. Everything should be observed and monitored, with feedback incorporated into the roadmaps of those artifacts.

An IDP built on the above principle enforces a natural order on the organization. That order is critical for a safe and efficient environment that fosters innovation at scale.

Security-first mindset

A city’s residents need to feel safe and protected so that they can go about their day-to-day tasks with ease. This sense of security does not come from having security guards and military personnel on every street corner. You don’t get a sense of safety when your bags are being checked whenever you enter a public place. That heavy-handed form of security arises when safety is added as a patchwork instead of being built into the design of the city. An IDP likewise needs to have security aspects built in, not bolted on as patchwork. That is why having a security-first mindset when building an IDP is important.

When developers create digital artifacts, the IDP must ensure they are secured by default. Developers should not be expected to BYOS (bring your own security). An IDP should offer end-to-end security required for all kinds of artifacts—for example, authentication, authorization, rate limits, encryption of data at rest, encryption of data in transit, role-based access control, prevention against container escapes, and so on.

Governance of digital artifacts in an IDP is also critically important to ensure safety. This involves things like removing unused artifacts, controlling access to public endpoints, controlling usage of third-party libraries and services, and so on.

A good IDP has to be built upon this principle of a security-first mindset. For an organization to be truly safe it needs to adhere to a “zero-trust” mode of operation. But security needs to be built-in, not bolted-on, so that developers get security “for free.”

Universal interface

An IDP must present a single, consistent interface with consistent abstractions. Many IDPs are a patchwork of many tools with different abstractions, usually “linked together” with a single portal. Such a patchwork makes it very hard for developers and platform engineers to use the platform for their needs. To get something done, you need to jump through many tools and their interfaces. To troubleshoot an issue, you need to have a good “guess” of where the issue might be visible, navigate to that tool, and trace back to wherever the root cause might be.

For example, when something goes wrong in a production system, you may need to navigate to Argo CI to check for deployment-related issues, navigate to Datadog for application-related issues, look at Nginx logs for routing-related issues, and so on.

A good IDP should be like the vehicle diagnostic scanner we plug into the on-board diagnostics port of a car, which provides one interface that aggregates the data from many sensors. Imagine how inconvenient it would be to have 20 different tools to diagnose the many different parts of your car. Unfortunately many IDPs today are like that—a collection of different tools linked together with a single portal. A good IDP should be built on this principle of having one consistent set of abstractions and one universal interface.

Self-service

Self service is probably the most popular principle of an IDP. A good IDP must provide self-service capabilities for both application developers and platform engineers.

For application developers, it’s about the ability to get things done without having to create tickets and wait for days for someone else to attend to them. Self service for developers is supported through what we call golden paths or paved roads. A golden path is a pre-defined, opinionated, and supported way of building, deploying, and operating software. A golden path may not be the only way to get something done on the platform, but it certainly is the recommended, curated path of least resistance.

Platform engineers are often ignored when it comes to self service. Usually, they are just expected to build self-service capabilities for app developers but almost never considered as engineers who need to be served by the platform themselves. But as we discussed in the first principle of this article, an IDP should serve platform engineers too. Platform engineers are expected to provide consistent infrastructures, environments, pipelines, and so on. Just like city builders are expected to provide the same voltage of electricity to all parts of a city, the same water pressure to every household, so are platform engineers expected to provide the same consistent foundations for developers to build on.

This consistency can only be achieved via self-service golden paths that are available to platform engineers. Self service for platform engineers means giving the platform team itself a set of automated, composable building blocks that allows them to design, extend, and operate the IDP efficiently without having to manually stitch together infrastructure or re-invent patterns each time. These self-service golden paths need to have the right guardrails built-in (for handling risky actions such as removing environments, for example), as well as audit trails and proper governance at scale.

Self-service golden paths, for both developers and platform engineers, are therefore a key principle of an IDP. Characteristics of such golden paths are:

  • Opinionated, not restrictive: They encode best practices (tech stack choices, CI/CD templates, security policies) while leaving flexibility for edge cases.
  • End-to-end workflow: They cover the full life cycle from scaffolding an app, provisioning infrastructure, and CI/CD to observability, monitoring, and incident response.
  • Self-serviceable: They are exposed to developers through self-service tools, UI, or CLI commands in the IDP.
  • Abstract away complexity: Developers and platform engineers don’t need to wire together Kubernetes, observability stacks, IAM, etc. The golden path bakes those in behind easy interfaces.
  • Continuously maintained: Platform engineers evolve golden paths alongside organizational needs, security requirements, and new technologies.

Ops-driven, declarative and automated

Automation (obviously) is critical for an IDP. You cannot achieve the goals of an IDP without automation. But automation without discipline is just a recipe for chaos. That is why ops-driven automation is the way to go. Ops-driven automation is basically about following GitOps workflows for changes made on the IDP. Every action performed on the IDP has to be versioned, recorded, and reversible. All actions need to have audit trails.

It’s important for an IDP’s automations to be in declarative form. This is about declaring the desired state of the system instead of continuously monitoring and reacting to events and alerts. Think of a city’s street lights. Someone needs to turn on the lights at dusk and turn them off at dawn. If something goes wrong in the middle of the night and the lights go off, someone needs to attend to it and turn the lights back on. This is a cumbersome process and requires a lot of labor. However, imagine being able to declare the desired state as “the lights need to be on whenever there’s darkness.” If the system can automatically reconcile the state of the lights to this desired state, the operation of the city’s lights becomes much more efficient and smooth. No one needs to wake up in the middle of the night just because of a glitch in the system. The system automatically recovers by itself.

For a truly hands-off experience of operating an IDP, the platform’s automations need to work in a declarative manner. Declarative automations with ops-driven workflows are therefore a key principle to build an IDP on.

Intelligent and insightful

An IDP serves many stakeholders. While it may primarily cater to application developers and platform engineers, the benefits of an IDP can be realized by many parts of an organization. To make this possible, the IDP should expose relevant intelligence and insights to all parties. Here are some examples of different stakeholders and the relevant data and insights.

  • For developers and operators: Insights needed for troubleshooting incidents. Primarily driven by observability data (i.e., logs, metrics, traces).
  • For business stakeholders: Insights that showcase the impact of digital artifacts on the business. For example, data such as orders placed, user growth, order cancellations, etc. This basically involves converting technical data from an organization’s APIs to business insights.
  • For engineering managers: Insights needed for assessing the organization’s speed and stability of delivering software. Primarily built on the well-known DORA metrics.
  • For architects: Insights that help determine the ROI of digital artifacts, insights on the efficiency of resources, cost breakdowns, etc.

In our data-intensive era, insights without intelligence are insufficient. For many years, we’ve been accustomed to looking at all kinds of graphs, charts, and reports. We’ve had to undergo the hard task of analyzing these reports to understand areas of improvement. But now, many of these tasks can be offloaded to AI agents within the IDP. In addition to showing graphs, charts, and reports, these agents can help determine the causes of failures and other areas of improvement for our digital artifacts as well.

Intelligence of course applies across the board, not just for insights. An IDP should incorporate AI everywhere it makes sense. Think of compliance, governance, monitoring, etc. AI has become a tool that can assist many such areas of an IDP. It is therefore crucial to consider AI and insights as a key principle of an IDP.

Product orientation

An IDP should not be a one-off project. A project is something you do once and finish. It has a start date and an end date. An IDP is never a finished project. It is something that continues to live and evolve, forever.

Delivery of software never ends. Furthermore, the types of software that are delivered and the ways in which they are delivered inevitably change. What you deliver today is not the same thing that you will deliver tomorrow. If you treat your IDP as a one-off project, you will build for today’s requirements and stop, and your IDP will not cater to the needs of tomorrow. This is why you need a product mindset for your IDP. Your IDP should evolve to meet future needs, keeping pace with the tools and technologies of the modern industry and providing a platform to lift up and modernize your organization.

A product mindset for an IDP requires proper product management. This includes maintaining a clear roadmap, having a regular release cadence, life-cycle management of features, issue tracking, and so on. It also requires paying attention to the non-technical factors required for its success. You need to create sufficient awareness around the platform, increase its adoption, gather feedback from users, feed those learnings into the roadmap, and continue to iterate.

This product mindset is therefore a key principle of an IDP. It is critical for long-term success. Treating an IDP as a project will give you short-term benefits but eventually fail in the long term. Strong product management with a real commitment to evolve the IDP like a product is what will guarantee its overall success.

Closing thoughts

A great IDP is more than a collection of tools. It’s your “planned city” for software delivery, providing consistent abstractions, reliable guardrails, and golden paths that empower both developers and platform engineers.

Many IDPs, both home-grown and off-the-shelf solutions, tend to focus only on reducing the cognitive load of developers and delivering software faster. While this approach may deliver short-term wins, it creates inefficiencies and extra toil in the long run.

A successful IDP removes barriers to efficiency and puts both developers and platform engineers on self-service golden paths. It creates order, saves time, saves money, increases satisfaction, and significantly improves an organization’s ability to innovate.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.


Google adds tiered storage to NoSQL Bigtable to reduce complexity, costs 30 Oct 2025, 12:20 am

Google has added a fully managed tiered storage capability inside its NoSQL database service Bigtable to help enterprises reduce the complexity and expenditure of storing data.

The new feature works by automatically moving less frequently accessed data from high-performance SSDs to infrequent access storage while allowing applications or queries to still access the less-used data, thereby lowering costs but maintaining access, the company explained in a blog post.  

Cloud storage tiering is hardly a novel concept for reducing costs, but what analysts expect will appeal to enterprises is storage tiering directly inside a database.

“Enterprises relying heavily on high-speed SSDs often face steep costs, so they move infrequently accessed data to cheaper media. The trade-off has traditionally been complexity and latency, as accessing cold data can require switching systems and waiting through delays. What Google is now doing is eliminating those hurdles by making both hot and cold data accessible through the same database,” said Bradley Shimmin, lead of the data intelligence, analytics, and infrastructure practice at The Futurum Group.

The analyst was referring to storage products offered by nearly all hyperscalers: Google’s Cloud Storage, Amazon’s S3, and Azure’s Blob Storage, all of which provide frequent (hot), infrequent (cold), and archive tiers while integrating with their respective database offerings.

“In these integrations, the database offloads cold data to this external system. Enterprises often have to manage two separate systems, deal with data movement pipelines, and potentially use different query methods for hot vs cold data,” Shimmin said.

The other challenge of these integrations, according to analysts, is the added cost of data retrieval from cold and archive tiers: Google itself charges $0.02 per GB and $0.05 per GB for retrieving data from cold and archive storage tiers, respectively, over and above operation and network charges.

AWS and Azure, too, have data retrieval charges, with the former offering automated tiering for an additional fee without retrieval charges as an option, and the latter offering an archiving tier.

Handy for agentic workloads

Further, analysts pointed out that the new capability will also help enterprises rein in costs, specifically those adopting AI workloads, particularly agentic workloads, which are currently proliferating rapidly.

“The new capability could have significant ramifications for the agentic era of AI, wherein we find ourselves generating tremendous amounts of data, such as vector indexes, which can get out of hand pretty quickly and force companies to prioritize only frequently accessed or updated vector embeddings to reduce costs,” Shimmin said.

However, with the new capability, enterprises can explore the use of new representations of data (vector embeddings, context logs, etc.) that drive AI consumption, Shimmin added.

Seconding Shimmin, HyperFRAME Research’s practice leader of AI Stack, Stephanie Walter, pointed out that the capability gives enterprises a “pragmatic” option to scale vector-heavy workloads without paying SSD prices for everything, leading to friendlier unit economics.

Automated storage tiering is also an option in Google’s distributed database Spanner, a capability that was introduced in March of this year.


Cursor 2.0 adds coding model, UI for parallel agents 29 Oct 2025, 7:22 pm

Anysphere has introduced Cursor 2.0, an update to the AI coding assistant that features the tool’s first coding model, called Composer, and an interface for working with many agents in parallel.

Both Cursor 2.0 and Composer were introduced October 29 by the Cursor team at Anysphere. Cursor is a fork of Microsoft’s popular Visual Studio Code editor, downloadable at cursor.com for Windows, macOS, and Linux.

Composer is a frontier model that is four times faster than similarly intelligent agent models, the Cursor team said. Built for low-latency agentic coding in Cursor, Composer completes most turns in fewer than 30 seconds, according to the team’s own benchmarks. A mixture-of-experts language model that supports long-context generation and understanding, Composer is specialized for software engineering through reinforcement learning in a diverse range of development environments, the Cursor team said. The model was trained with a set of tools including codebase-wide semantic search, which makes it better at understanding and working in large code bases, they added.

Cursor’s new multi-agent interface, meanwhile, was designed to be centered around agents rather than files, the Cursor team said. Cursor 2.0 makes it easy to run many agents in parallel without them interfering with each other, powered by git worktrees or remote machines. The team said it has found that having multiple models attempt the same problem and picking the best result significantly improves the final output, particularly for harder tasks.

Cursor 2.0 also is intended to streamline the “bottlenecks” of reviewing code and testing changes when working with agents, making it easier to quickly review changes an agent has made and to dive deeper into code when needed. The addition of a native browser tool allows Cursor to test its work and iterate until the correct final result has been produced.


GitHub launches Agent HQ to bring order to AI-powered coding 29 Oct 2025, 2:57 am

GitHub is taking a major step toward redefining enterprise software development with the launch of Agent HQ, a platform that lets developers manage and orchestrate multiple AI coding agents from OpenAI, Anthropic, Google, and others directly within the GitHub environment.

The move suggests a new phase in enterprise AI adoption, as organizations look to govern, audit, and scale AI-driven coding within their existing DevOps workflows instead of using separate tools.

The new platform introduces centralized mission control, code quality monitoring, and governance features that give CIOs and development leaders greater visibility into how AI contributes to code creation, review, and deployment across their organizations.

“It extends to VS Code with new ways to plan and customize agent behavior,” GitHub said in a blog post. “And it is backed by enterprise-grade functionality: a new generation of agentic code review, a dedicated control plane to govern AI access and agent behavior, and a metrics dashboard to understand the impact of AI on your work.” 

GitHub said that the real “power” of Agent HQ comes from the mission control, which provides a consistent interface across GitHub, VS Code, mobile, and the CLI, allowing users to direct, monitor, and manage every AI-driven task.

Developers can also create custom agents in VS Code using configuration files that define project-specific rules and coding standards, giving enterprises finer control over how AI operates within their workflows.

The platform extends its reach through integrations with tools such as Slack, Jira, Microsoft Teams, and Azure Boards, positioning GitHub as a central hub for AI-driven collaboration across enterprise software teams.

Orchestrating the future of AI coding

Analysts say GitHub’s latest initiative positions the company as a key orchestration layer for the next generation of AI-powered development tools. Rather than adding yet another standalone coding agent, GitHub is attempting to unify them under a common governance and workflow model.

According to IDC, developers spend only about 16% of their time actually writing new code, with the rest consumed by operational, background, or maintenance tasks. Tools powered by generative and agentic AI are seen as a major lever to improve productivity by automating routine work and enabling developers to focus on higher-value tasks.

“With too many players in the AI space coming up, it becomes difficult for developers to switch between multiple tools and agents,” said Sharath Srinivasamurthy, associate vice president of research at IDC. “Most enterprises have multiple developer (and agentic) platforms, and it complicates the lives of developers. In this regard, Agent HQ will act as a single source for all agentic AI coding tools.”

The consolidation of agents within GitHub also opens new flexibility for enterprises. It allows them to mix and match agents based on task specialization, performance, or cost, creating a more open and adaptable ecosystem.

Such interoperability could weaken traditional vendor lock-in models and shift market power toward platforms that prioritize orchestration over exclusivity.

“This architecture preserves GitHub’s core primitives (e.g., Git, pull requests, CI/CD) while enabling diverse agents to collaborate seamlessly under a common governance model,” said Biswajeet Mahapatra, principal analyst at Forrester. “By supporting multi-agent interoperability and avoiding proprietary silos, Agent HQ reduces dependence on any single vendor.”

Others noted that the broader AI ecosystem is now racing to build frameworks for agents to interoperate, which could create fragmentation as enterprises weigh which framework to adopt.

“GitHub’s Agent HQ potentially solves this for DevOps really well, managing a complex, multi-agent fleet with strong governance and policy framework with auditing and metrics dashboard,” said Neil Shah, VP for research at Counterpoint Research. “This could reshape the DevOps practices from automated planning to evaluation of AI-generated code, CI/CD pipelines, and security guardrails.”

Governance and compliance

Agent HQ arrives at a time when CIOs are grappling with growing governance and compliance challenges as AI agents become deeply embedded in enterprise software workflows. The rapid adoption of generative and agentic AI has expanded capabilities, but it has also introduced new layers of complexity in oversight and security.

Srinivasamurthy noted that while many organizations are investing heavily in AI, few have the maturity to manage and govern these systems effectively at scale. “Only around 8% of enterprises are ready to govern agentic AI at scale,” he said.

“As multiple AI agents proliferate, CIOs could face challenges similar to past SaaS governance issues, including fragmented interfaces, inconsistent behaviors, and overlapping permissions,” Mahapatra said. “Agentic AI systems also tend to lack clear traceability for decisions and actions, making compliance and accountability more complex.”

The growing autonomy of these agents introduces additional risks, effectively creating “digital insiders” with varying levels of privilege and access. GitHub’s centralized control plane, which includes identity management, audit logging, and policy enforcement, may help CIOs establish unified governance hubs for managing AI agents across teams and projects.

“Platforms like GitHub are also integrating agentic workflows with enterprise-grade security and compliance capabilities, making it easier for organizations to align with standards such as the NIST AI Risk Management Framework and the EU AI Act,” Mahapatra said.


The quiet glory of REST and JSON 29 Oct 2025, 2:00 am

I don’t expect that many developers today fully appreciate the quiet glory that is REST and JSON.

But then, most developers today have not been around the software business as long as I have. It’s only natural that we old timers have a clearer idea of how incredibly far the technology has come.

I’ve been writing code long enough to remember when computers had 5¼-inch floppy drives and exactly zero network cards. Connectivity was a 2400 baud modem talking to a local BBS via the plain old telephone system. The notion of two computers talking to each other was conceivable—but just the two. The Internet was just a twinkle in the eyes of a few DARPA engineers.  

Back then, getting two computers to exchange text and basic file data was pretty well established. But getting one of those computers to execute some code and pass the result back? Well, that was really, really challenging. Today, we do it with the REST/JSON combination. It’s amazing, really.

DCOM and CORBA

Early efforts at developing remote computing were crude by today’s standards. Windows was the dominant operating system. To allow Windows software to communicate over a network, Microsoft developed DCOM, the Distributed Component Object Model. DCOM handled data transformation, security, and network transport under the hood, allowing computers to execute remote code and pass the results back and forth. But it was notoriously complex (see marshalling). And of course, it only worked on a local Windows network.

Early attempts at making remote computing work across language and network boundaries revolved around CORBA, the Common Object Request Broker Architecture. CORBA provided a means for code written in different languages and running on different machines to act as if they were local objects. You had to use an Object Request Broker (ORB) and the Interface Definition Language (IDL) to get everything to work. And yes, like DCOM, CORBA was complex, brittle, and challenging to keep up and running. On top of all that, CORBA implementations were expensive and came mainly from the big tech vendors.

In short, both DCOM and CORBA tried to make remote calls feel local. And both collapsed under their own weight.

SOAP and WSDL

The first attempt at abstracting away much of the complexity of getting one computer to execute code for another was SOAP, the Simple Object Access Protocol. SOAP advanced the cause by leveraging HTTP as the network protocol and XML as the communication medium. SOAP used an XML-based description language called WSDL (Web Services Description Language) to define the structure and behavior of web services.

SOAP was a step in the right direction, but it too required pretty much everything to be specified ahead of time. Data and objects had to be structured and defined before any communication could happen. It looked promising at first—XML over HTTP!—but developers found themselves buried under layers of WSDL, namespaces, and rigid schemas. Each small change in an API meant regenerating client stubs, and debugging was onerous. A missing XML namespace could consume an entire afternoon.

SOAP was, however, a solid step forward. And the lessons learned from SOAP helped pave the way for REST and JSON to finally become the standard way for systems to talk to one another over the web.

There were other protocols along the way, such as Java’s RMI (Remote Method Invocation) and XML-RPC, which met specific needs, but the progression from DCOM to CORBA to SOAP/XML to REST/JSON is the main highway that led us to where we are now.

Revel in the glory

I imagine that many developers today don’t properly appreciate the glory that is REST/JSON because it is such an elegant and beautiful solution. In 2000, it was Roy Fielding who had a “light bulb over the head” moment and saw the connection between standard CRUD operations and the GET, POST, PUT, and DELETE verbs of the HTTP protocol. His lovely insight opened our eyes to the notion that the web was more than a platform for serving documents. The web was, in and of itself, a giant computing platform.

Just like that, all of the marshalling and crazy protocols like DCOM, CORBA, and even SOAP were abstracted away. Today, REST rides along on plumbing that nearly every computer in the world already speaks: HTTP. Security? Well, good old SSL/TLS will do the trick. And by leveraging Douglas Crockford’s very flexible and powerful JSON, or JavaScript Object Notation, nearly every difficulty and complexity in moving objects and data between computers and operating systems vanishes in a puff of smoke. REST made remote procedure calls as universal, scalable, and programming language-agnostic as the web itself. JSON took care of the rest.
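
To see just how little ceremony is left, here is a minimal sketch in Go (any language with an HTTP client and a JSON parser would look much the same): one GET against a placeholder URL, one decode into a typed struct. The endpoint and field names are invented for illustration, not a real API.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Todo mirrors the JSON shape returned by a hypothetical endpoint;
// the struct tags map JSON keys onto Go fields.
type Todo struct {
	ID    int    `json:"id"`
	Title string `json:"title"`
	Done  bool   `json:"done"`
}

func main() {
	// One HTTP GET plus one JSON decode is the entire client-side exchange.
	resp, err := http.Get("https://api.example.com/todos/1") // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var todo Todo
	if err := json.NewDecoder(resp.Body).Decode(&todo); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", todo)
}

Compare that to registering COM components or hand-maintaining WSDL, and the appeal is obvious.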

Today, using REST/JSON is about as familiar to developers as breathing. Practically every library, programming language, and DBMS in the world supports REST/JSON and consumes it natively and naturally. REST/JSON could easily be viewed as the lifeblood of the web today. 

So next time you curl an endpoint and watch a neat little JSON blob pop out, remember—this used to take days of configuration, COM registration, and XML misery. REST/JSON didn’t just simplify distributed computing; it democratized it. Take a moment and revel in the glory that is REST and the universal JSON data format. And appreciate the struggle it took to arrive at such an elegant and simple solution.


The top 4 JVM languages and why developers love them 29 Oct 2025, 2:00 am

The Java virtual machine provides a high-performance, universal runtime for a wealth of popular languages beyond just Java. In this article, we’ll look at the characteristic strengths and common use cases of four of the most popular JVM languages: Kotlin, Scala, Groovy, and Clojure.

Kotlin

Kotlin is a modern language that has seen a groundswell of developer enthusiasm over the last few years. This popularity is thanks in large part to its highly expressive syntax, which includes object-oriented and functional programming support, but it doesn’t stop there. Kotlin is interoperable with Java, and it includes multiplatform tooling and cross-language compilation. As with other JVM languages, you can use GraalVM to compile Kotlin to native binaries for highly optimized deployment, with fast startup, quick shutdown, and lean runtime resource use.

In 2019, Google identified Kotlin as the preferred language for Android development, a vote of confidence that turbo-boosted its popularity with developers.

Another factor in Kotlin’s strength is its backing by JetBrains, the creator of the IntelliJ IDE. JetBrains has consistently maintained and refined Kotlin. That investment has ensured Kotlin’s stability while keeping it on the leading edge of innovation, both qualities developers appreciate.

Because it is 100% interoperable with Java, Java developers and organizations can adopt Kotlin gradually. It is easy for a Java developer to get comfortable with Kotlin, and vice versa. It is also not hard to hold both languages in your head. For experienced Java developers, Kotlin feels like an expanded version of Java. And even if you don’t know Java, you can still become an expert in Kotlin.

Kotlin obviously shines for use on Android, but it’s also popular in other areas, including server-side development. Kotlin is well-suited to developing DSLs (domain-specific languages). One of these, the Kotlin HTML DSL, is a powerful, built-in server-side templating language for the web.

One of Kotlin’s best-known assets is its null safety feature, which helps eliminate NullPointerExceptions. Standard types like String cannot hold null unless you explicitly allow it with the nullable modifier (String?). When you use nullable types, the compiler disallows access without a safety check. Kotlin also gives you the null-safe call operator (?.), which is similar to optional chaining in JavaScript. Here’s a look at Kotlin using the ?: (Elvis) operator to provide a default value:

val length = middleName?.length ?: 0

In this example, if middleName is null, length will be set to 0.

Another killer feature is coroutines, which provide a structured way to manage concurrent operations. Kotlin’s coroutines were inspired by Go’s goroutines, and in turn helped inspire Java’s new structured concurrency model. This example shows how a Kotlin coroutine can give synchronous-looking syntax to asynchronous logic:

import kotlinx.coroutines.*

fun main() = runBlocking { // main coroutine
    // Launch a new coroutine
    launch {
        delay(1000L)       // suspend for 1 second
        print("InfoWorld!")  // Print after delay
    }

    print("Hello,")      // The main coroutine continues 
}

We’ve only scratched the surface of Kotlin’s abilities, but these examples should give you an idea of why it’s become so popular with developers. As a mainline language, Kotlin has vastly increased the power and reach of the JVM.

Also see: Kotlin for Java developers.

Scala

Scala differentiates itself from other JVM languages by making functional programming foundational and implementing it rigorously. As a result, developers who prefer functional programming and want to leverage the JVM often turn to Scala. Although it’s not emphasized, Scala also has strong support for object-oriented programming.

Scala is very popular for large-scale, high-throughput, real-time data processing. It is the language of Apache Spark, the distributed platform for big data streaming, batching, analytics, machine learning, and more. Spark’s extensive use of Scala’s ability to tie together streams of events with functional operators is another powerful driver of Scala adoption.

Pattern matching is one of Scala’s most popular functional programming features. Here’s an example of Scala’s switch-like syntax for flow control:

case class Message(sender: String, body: String)

val notification: Any = Message("Ada Lovelace", "Hello, InfoWorld!")

notification match {
  case Message(sender, body) => println(s"Message from $sender: $body")
  case "Ping"                => println("Received a Ping")
  case _                     => println("Unknown notification type")
}

This provides a branch for when notification is a Message and destructures it, binding the sender and body fields so we can use them. If notification is a String containing “Ping”, the second case fires, and the underscore defines the default. The beauty of this construct is that it all happens within the functional programming paradigm.

Scala also emphasizes immutability, another tenet of functional programming. Immutability makes for simpler software that is less prone to errors. In Scala, the primary variable declaration keyword is val, which declares an immutable value, and the default built-in collections like List, Vector, and Map are immutable. You transform collections using functional operations like filter, which return new collections rather than modifying the originals.

Scala is also very strong in concurrency, employing actors in a powerful, reactive-style programming system. Scala’s actor model forms the basis of the renowned Akka framework, a set of libraries for multithreaded, distributed computing.

Scala also has a sophisticated type system that supports advanced use cases. Here’s an example of a trait, which combines aspects of an abstract class and an interface. Traits allow a class to mix in multiple ancestors with both abstract and concrete members:

trait Speaker {
  def speak(): String 
  
  def announce(message: String): Unit = { 
    println(message)
  }
}

class Dog extends Speaker {
  override def speak(): String = "Woof!"
}

class Person(name: String) extends Speaker {
  override def speak(): String = s"Hello, my name is $name."
}

@main def main(): Unit = {
  val sparky = new Dog()
  val ada = new Person("Ada")

  println(s"The dog says: ${sparky.speak()}") 

  println(s"The person says: ${ada.speak()}") 

  ada.announce("I am learning about traits!") 
}

Notice that the Speaker trait has both concrete and abstract methods, and classes that extend it can extend more than one trait, which is not possible with an abstract class.

There is more to Scala, of course, but these examples give you a taste of it.

Groovy

Groovy is the original JVM alternative. It is a highly dynamic scripting language popular for its simple, low-formality syntax. It’s the language of the ubiquitous Gradle build manager, and is often used as a glue language, or when an application needs customizable extension points. It is also well-regarded for its ability to define DSLs.

For developers coming from Java, Groovy feels like a version of Java with much of the boilerplate and formality removed. Groovy is largely a superset of Java, meaning most Java code is also valid Groovy.

Groovy is also the language of the Spock test framework.

Groovy dispenses with the “unnecessary” semicolons, and it automatically provides undeclared variables for scripts (a mechanism known as script binding). This is especially handy for application extensions and DSLs, where the host application (typically written in Java) creates a context for the Groovy script and users can add functionality without declaring variables.

This example offers a taste of Groovy’s streamlined flavor:

def list = [1, 2, 3, 4, 5]

def doubled = list.collect { it * 2 }
println("Doubled: " + doubled) //-> Doubled: [2, 4, 6, 8, 10]

def evens = list.findAll { it % 2 == 0 }
println("Evens: " + evens) //-> Evens: [2, 4]

Here, you can see Groovy’s low-formality collection handling, which is based on functional programming.

Another of Groovy’s popular features is its dynamic, optional typing. You can declare a variable’s type, but you don’t have to. If you don’t declare the type, Groovy manages the variable based on how it is used, a technique known as duck typing. (JavaScript behaves similarly.)

Finally, Groovy supports metaprogramming, which is something like a more powerful version of the Java reflection API.

Clojure

Last but not least, Clojure is a descendant of Lisp, a foundational language in artificial intelligence and symbolic processing. Lisp has influenced many languages and holds a special place for language buffs, thanks to its unique blend of expressive yet simple syntax and its “code as data” philosophy.

Code as data, also known as homoiconicity, means the code is represented as data structures in the language. This opens up metaprogramming opportunities because the code representation can be loaded and manipulated directly as software.

Code as data also creates possibilities for powerful macros, because a macro operates on the structure of the code it expands rather than on raw text. This approach is different from languages like C, where macros are simple text substitution, which often leads to sneaky errors.

Here’s a simple function in Clojure’s Lisp-like syntax:

;; Comments in Clojure use double semi-colons
(defn greet [name]
  (str "Hello, " name "!"))

The parenthetically enclosed blocks you see are the code-as-data idea in action. Parentheses denote a collection (a list), and functions are defined and called as lists whose elements are keywords, function names, and arguments.

Clojure is also known for its strong concurrency model, having been built from the ground up to simplify state management across multiple threads. Rather than orchestrating mutable state between threads, which leaves room for errors, Clojure leans on immutability and well-managed state transitions, which makes it a well-rounded concurrent language. Clojure also includes an agent model for dealing with mutable state and concurrency.

Clojure is a highly structured and refined language. It is rigorously functional in its philosophy and delivers significant power to the developer. These qualities in Clojure’s design and execution have made it a well-respected choice among programmers.

Conclusion

The four languages described here are the stars of the JVM alternative languages universe, but there are many others. In particular, there are JVM versions of mainstream languages, such as JRuby and Jython.

Kotlin has become a full-blown mainstream language in its own right and has recently entered the Tiobe top 20. But all four languages bring strengths in particular areas. And they all demonstrate the power of the JVM itself.

Here’s a look at the high-level characteristics of the four languages:

Language | Paradigm | Learning curve | Killer use case | Core values
Kotlin | OOP, functional (pragmatic) | Easy | Android apps | Pragmatism, safety
Scala | Functional, OOP (rigorous) | Moderate | Big data (Spark) | Type safety, scalability
Clojure | Functional (Lisp) | Hard | Data-centric APIs | Simplicity, immutability
Groovy | Dynamic, scripting | Easy | Builds (Gradle) | Flexibility, scripting


What’s the Go language really good for? 29 Oct 2025, 2:00 am

Over its more than 15 years in the wild, Google’s Go programming language has evolved from a curiosity for alpha geeks to the battle-tested programming language behind some of the world’s most important cloud-native software projects.

If you’ve ever wondered why Go is the language of choice for projects like Docker and Kubernetes, this article is for you. We’ll discuss Go’s defining characteristics and how it differs from other programming languages. You will also learn what kinds of projects Go is best suited for, including the state of Go development for AI-powered tools. We’ll conclude with an overview of Go’s feature set, some limitations of the language, and where it may be going from here.

Also see: Golang tutorial: Get started with the Go language.

Go is small and simple

Go, or Golang as it’s often called, was created by Google employees—chiefly longtime Unix guru and Google distinguished engineer Rob Pike—but it’s not strictly speaking a “Google project.” Rather, Go is a community-developed open source project, spearheaded by leadership with strong opinions about how Go should be used and the direction the language should take.

Go is meant to be easy to learn and straightforward to use, with syntax that is simple to read and understand. Go does not have a large feature set, especially when compared to languages like C++. Go’s syntax is reminiscent of C, making it relatively easy for longtime C developers to learn. That said, many features of Go, especially its concurrency and functional programming features, harken back to languages like Erlang.

As a C-like language for building and maintaining cross-platform enterprise applications of all sorts, Go has much in common with Java. And as a means for enabling rapid development of code that might run anywhere, you could draw a parallel between Go and Python, though the differences outweigh the similarities.

The Go documentation describes Go as “a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language.” Even a large Go program will compile in a matter of seconds. Plus, Go avoids much of the overhead of C-style include files and libraries.

Advantages of the Go language

Go is a versatile, convenient, fast, portable, interoperable, and widely supported modern language. These characteristics have helped to make it a top choice for large-scale development projects. Let’s look more closely at each of these positive qualities of Go.

Go is versatile and convenient

Go has been compared to interpreted languages like Python in its ability to satisfy many common programming needs. Some of this functionality is built into the language itself, such as goroutines for concurrency and thread-like behavior, while additional capabilities are available in Go standard library packages, like the http package. Like Python, Go provides automatic memory management capabilities including garbage collection.

Unlike interpreted languages, however, Go code compiles to a fast-running native binary. And unlike C or C++, Go compiles extremely fast—fast enough to make working with Go feel more like working with an interpreted language than a compiled one. Further, the Go build system is less complex than those of other compiled languages. It takes few steps and little bookkeeping to build and run a Go project.
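
As a small illustration of that convenience, here is a minimal sketch of a complete HTTP service built with nothing but the standard library's net/http package; the route and greeting are arbitrary choices for the example:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// net/http serves each incoming request on its own goroutine,
	// so basic concurrency comes along for free.
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from the Go standard library")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Build it with go build and you get a single, self-contained binary serving requests.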

Go is faster than many other languages

Go binaries run more slowly than their C counterparts, but the difference in speed is negligible for most applications. Go’s performance is close to C’s for the vast majority of work, and generally much faster than that of other languages known for speed of development—including JavaScript, Python, and Ruby.

Go is portable and interoperable

Executables created with the Go toolchain can stand alone, with no default external dependencies. The Go toolchain is available for a wide variety of operating systems and hardware platforms, and can be used to compile binaries across platforms. What’s more, Go delivers all of the above without sacrificing access to the underlying system. Go programs can talk to external C libraries or make native system calls. In Docker, for instance, Go interacts with low-level Linux functions, cgroups, and namespaces to work container magic.

Go is widely supported

The Go toolchain is freely available as a Linux, macOS, or Windows binary, or as a Docker container. Go is included by default in many popular Linux distributions, such as Red Hat Enterprise Linux and Fedora, making it somewhat easier to deploy Go source to those platforms. Support for Go is also strong across many third-party development environments, from Microsoft’s Visual Studio Code to ActiveState’s Komodo IDE.

Also see: 8 reasons developers love Go—and 8 reasons they don’t.

Optimal use cases for the Go language

No language is suited to every job, but some languages are suited to more jobs than others. Go shines brightest in cloud-native development projects, distributed network services, and for developing utilities and stand-alone tools. Let’s consider the qualities that make Go especially well-suited to each of these project types.

Cloud-native development

Go’s concurrency and networking features, and its high degree of portability, make it well-suited for building cloud-native apps. In fact, Go was used to build several cornerstones of cloud-native computing including Docker, Kubernetes, and Istio.

Distributed network services

Network applications live and die by concurrency, and Go’s native concurrency features—goroutines and channels, mainly—are well suited for such work. Consequently, many Go projects are for networking, distributed functions, and cloud services. These include APIs, web servers, Kubernetes-ready frameworks for microservices, and much more.
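
Here is a minimal sketch of those primitives working together; the work items are placeholders, but the fan-out/fan-in pattern is one that many Go network services are built on:

package main

import (
	"fmt"
	"sync"
)

func main() {
	items := []string{"a", "b", "c"} // stand-ins for real units of work
	results := make(chan string, len(items))

	var wg sync.WaitGroup
	for _, item := range items {
		wg.Add(1)
		go func(item string) { // each unit of work runs in its own goroutine
			defer wg.Done()
			results <- "processed " + item
		}(item)
	}

	wg.Wait()
	close(results)

	for r := range results { // the channel safely collects results across goroutines
		fmt.Println(r)
	}
}

The same pattern scales from a toy example like this to fanning real requests out across network calls.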

Utilities and standalone tools

Go programs compile to binaries with minimal external dependencies. That makes them ideally suited to creating utilities and other tools, because they launch quickly and can be readily packaged up for redistribution. One example is an access server called Teleport, which can be deployed on servers quickly by compiling it from source or downloading a prebuilt binary.
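
To give a feel for why small tools are such a natural fit, here is a sketch of a tiny command-line utility using only the standard flag package (the flag names are invented for the example); go build turns it into one dependency-free binary:

package main

import (
	"flag"
	"fmt"
	"strings"
)

func main() {
	// flag.Parse fills these values from the command line,
	// e.g. ./greet -name Gopher -shout
	name := flag.String("name", "world", "who to greet")
	shout := flag.Bool("shout", false, "print the greeting in upper case")
	flag.Parse()

	greeting := fmt.Sprintf("Hello, %s!", *name)
	if *shout {
		greeting = strings.ToUpper(greeting)
	}
	fmt.Println(greeting)
}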

Limitations of the Go language

Now let’s consider some of the limitations of Go. For one, it omits many language features developers may desire. It also packs everything into its binaries, so Go programs can be large. Furthermore, Go’s garbage collection mechanism delivers automatic memory management at the cost of absolute performance. The language also lacks a standard toolkit for building GUIs, and it is unsuited to systems programming.

Let’s look at each of these issues in detail.

Go omits many desirable language features

Go’s opinionated set of features draws both praise and criticism. Go is designed to err on the side of being small and easy to understand, with certain features deliberately omitted. The result is that some features that are commonplace in other languages simply aren’t available in Go. This is purposeful, but it’s still a drawback for some types of projects.

One thing Go omits that you will find in other languages is macros, commonly defined as the ability to generate program code at compile time. C, C++, and (the rising star) Rust all have macro systems. Go does not have macros, or at least not of the same variety as those languages. What Go does have is a tool command, go generate, which looks for magic comments in Go source and executes them. This can be used to generate Go source code, or even run other commands, but its main use is to programmatically generate code, usually as a precursor to the build process. (Technical blogger Eli Bendersky explains the ‘go generate’ command in detail.)
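
For a feel of what those magic comments look like, here is a sketch; the //go:generate directive is real Go tooling, while the ./gen helper program it invokes is a hypothetical generator assumed for illustration:

// Package status sketches the go generate workflow. Running `go generate ./...`
// executes the command named in the directive below.
package status

//go:generate go run ./gen

// Status is an enumeration whose String() method the hypothetical
// ./gen program would write into status_string.go.
type Status int

const (
	Pending Status = iota
	Active
	Closed
)

Running go generate before go build is the typical workflow.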

Another longstanding complaint with Go was, until recently, the lack of generic functions, which allow a function to accept many different types of variables. Go’s development team held out against adding generics to the language for many years because they wanted a syntax and set of behaviors that complemented the rest of Go. But as of Go 1.18, released in early 2022, the language includes a syntax for generics. Because go generate and its code-generation abilities emerged as one possible way to partially address the lack of generics, this functionality is no longer as commonly used in Go.
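
As a brief taste of the Go 1.18+ syntax, here is a sketch of a generic Map helper (the function and names are illustrative, not from any particular library):

package main

import "fmt"

// Map applies fn to every element of in and returns the results.
// The type parameters T and U let one function serve many element types.
func Map[T, U any](in []T, fn func(T) U) []U {
	out := make([]U, 0, len(in))
	for _, v := range in {
		out = append(out, fn(v))
	}
	return out
}

func main() {
	doubled := Map([]int{1, 2, 3}, func(n int) int { return n * 2 })
	labels := Map([]int{1, 2, 3}, func(n int) string { return fmt.Sprintf("item-%d", n) })
	fmt.Println(doubled) // [2 4 6]
	fmt.Println(labels)  // [item-1 item-2 item-3]
}

Both calls reuse the same function, which previously would have required interface{} plus type assertions, or generated code.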

The fact is that Go adds major language features rarely, and only after much consideration. This works to preserve broad compatibility across versions, but it comes at the cost of slower innovation.

Also see: What you need to know about Go, Rust, and Zig.

Go’s binaries are large

Another potential downside to Go is the size of the generated binaries. Go binaries are statically compiled by default, meaning that everything needed at runtime is included in the binary image. This approach simplifies the build and deployment process, but at the cost of a simple “Hello, world!” weighing in at around 1.5MB on 64-bit Windows. The Go team has been working to reduce the size of those binaries with each successive release. It is also possible to shrink Go binaries with compression or by removing Go’s debug information. This last option may work better for standalone distributed apps than for cloud or network services, where having debug information is useful if a service fails in place.

Go’s garbage collection is resource hungry

Yet another touted feature of Go, automatic memory management, can be seen as a drawback, as garbage collection requires a certain amount of processing overhead. By design, Go doesn’t provide manual memory management, and garbage collection in Go has been criticized for not dealing well with the kinds of memory loads that appear in enterprise applications.

That said, each new version of Go seems to improve the memory management features. For example, Go 1.8 brought significantly shorter lag times for garbage collection, and Go 1.25 introduced a new, experimental garbage collector. While Go developers can use manual memory allocation in a C extension, or by way of a third-party manual memory management library, most prefer native solutions.

Go doesn’t have a standard GUI toolkit

Most Go applications are command-line tools or network services. That said, various projects are working to bring rich GUIs for Go applications. There are bindings for the GTK and GTK3 frameworks. Another project is intended to provide platform-native UIs across platforms, although it focuses on Go 1.24 forward only. But no clear winner or safe long-term bet has emerged in this space. Also, because Go is platform-independent by design, it is unlikely any project in this vein will become a part of the standard package set.

You shouldn’t use Go for systems programming

Finally, although Go can talk to native system functions, it was not designed for developing low-level system components such as kernels, device drivers, or embedded systems. After all, the Go runtime and the garbage collector for Go applications are dependent on the underlying operating system. (Developers interested in a cutting-edge language for that kind of work might look into using Rust.)

The future of the Go language

Go’s development is turning more toward the wants and needs of its developer base, with Go’s minders changing the language to better accommodate this audience rather than leading by stubborn example. A case in point is generics, which were finally added to the language after much deliberation about the best way to do so.

The 2024 Go Developer Survey found developers were overall satisfied with Go. Challenges that surfaced were generally due to the verbosity of error handling, missing or immature frameworks, and using Go’s type system—areas ripe for future development.

Like most languages, Go has gravitated to a core set of use cases over time, finding its niche in network services. In the future, Go is likely to continue expanding its hold there. Other use cases cited in the developer survey include creating APIs or RPC services (74% of respondents), followed by CLI applications (63%), web services (45%), libraries/frameworks (44%), automation (39%), and data processing (37%). While only 4% of respondents mentioned using Go to develop AI technologies, those who did reported that Go was a strong platform for running AI-powered workloads in production. For those wanting to develop ML/AI with Go, lack of tooling (23%) and the fact that Python is the default choice for such work (16%) topped the reasons why.

It remains to be seen how far Go’s speed and development simplicity will take it into other use cases, especially those dominated by other languages and their existing use cases. Rust covers safe and fast systems programming (a space Go is unlikely to enter); Python is still a common default for ML/AI, prototyping, automation, and glue code; and Java remains a stalwart for enterprise applications.

But Go’s future as a major programming language is already assured—certainly in the cloud, where the speed and simplicity of Go ease the development of scalable infrastructure that can be maintained over the long run.

Also see: Go language evolving for future hardware, AI workloads.


Eclipse LMOS AI platform integrates Agent Definition Language 28 Oct 2025, 5:14 pm

The Eclipse Foundation has introduced ADL (Agent Definition Language) functionality to its LMOS (Language Models Operating System) AI project. The announcement came on October 28.

The goal of Eclipse LMOS is to create an open platform where AI agents can be developed and integrated across networks and ecosystems, according to Eclipse. Built on standards like Kubernetes, LMOS is in production with Deutsche Telekom, one of the largest enterprise agentic AI deployments in Europe, Eclipse said.

ADL, meanwhile, addresses the complexity of traditional prompt engineering. ADL provides a structured, model-agnostic framework enabling engineering teams and businesses to co-define agent behavior in a consistent, maintainable way, Eclipse said. This shared language increases the reliability and scalability of agentic use cases; enterprises can design and govern complex agentic systems with confidence, according to Eclipse. This capability further distinguishes Eclipse LMOS from proprietary alternatives, Eclipse added. “With Eclipse LMOS and ADL, we’re delivering a powerful, open platform that any organization can use to build scalable, intelligent, and transparent agentic systems,” Eclipse Executive Director Mike Milinkovich said in a statement.

Eclipse LMOS is designed to let enterprise IT teams leverage existing infrastructure, skills, and devops practices, Eclipse said. Running on technologies including Kubernetes, Istio, and JVM-based applications, LMOS integrates into enterprise environments, accelerating adoption while protecting prior investments. ADL’s introduction empowers non-technical users to shape agent behavior; business domain experts, not just engineers, can directly encode requirements into agents, accelerating time-to-market and ensuring that agent behavior accurately reflects real-world domain knowledge, the foundation said. In addition to Eclipse LMOS ADL, LMOS is composed of two other core components:

  • Eclipse LMOS ARC Agent Framework: A JVM-native framework with a Kotlin runtime for developing, testing, and extending AI agents; it comes with a built-in visual interface for quick iteration and debugging.
  • Eclipse LMOS Platform: An open, vendor-neutral orchestration layer for agent lifecycle management, discovery, semantic routing, and observability, currently in the alpha state of development.


TypeScript rises to the top on GitHub 28 Oct 2025, 3:12 pm

TypeScript, Microsoft’s strongly typed JavaScript variant, has become the most-used language on GitHub, according to GitHub’s Octoverse 2025 report released on October 28.

August 2025 marked the first time TypeScript emerged as the most-used language on GitHub, overtaking both JavaScript and Python, said the report. The rise of TypeScript illustrates the shift toward using typed languages, which can make agent-assisted coding more reliable in production, according to GitHub. Furthermore, most major front-end frameworks are now scaffolding with TypeScript by default, said the report. And, while Python remains dominant for AI and data science workloads, the JavaScript/TypeScript ecosystem accounts for more overall development activity.

TypeScript’s rise was one of three key shifts cited in the report, and all three were related to AI. The report also noted that the use of generative AI tools is now standard in development, with more than 1.1 million public repositories using an LLM SDK, 693,867 of them built in the past 12 months. Developers merged a record 518.7 million pull requests in the 2025 time frame (a 29% increase year over year), and AI adoption accelerated, with 80% of new developers on GitHub using Copilot within their first week.

The third shift is in the way AI is reshaping developer choice, not just code. In the past, developer choice meant choosing an IDE, language, or framework. That has changed: GitHub now sees a correlation between the rapid adoption of AI tools and evolving language preferences. This and other shifts suggest AI now influences not only how fast code is written, but which languages and tools developers use. As the report states, “Agents are here. Early signals in our data are starting to show their impact, but ultimately point to one key thing: we’re just getting started and we expect far greater activity in the months and years ahead.”

The GitHub report cited more than 4.3 million AI-related repositories, noting record-level activity across repositories. Developers created more than 230 new repositories every minute, merged 43.2 million pull requests on average each month (+23% year over year), and pushed nearly 1 billion commits in 2025 (+25.1% year over year)—including a record of nearly 100 million commits in August alone.

GitHub’s reference to 2025 covers the time period of September 1, 2024, through August 31, 2025. In other findings:

  • GitHub hosted 630 million total projects.
  • More than 180 million developers were using GitHub.
  • Merged pull requests averaged 43.2 million per month.
  • The number of contributions to public projects reached 1.12 billion.

Also notable is that, in 2025, GitHub saw 30% faster fixes of critical severity vulnerabilities, with 26% fewer repositories receiving critical alerts.

