Astro web framework maker merges with Cloudflare | InfoWorld


Astro web framework maker merges with Cloudflare 16 Jan 2026, 2:29 pm

Astro Technology Company, maker of the Astro web framework, has been acquired by Cloudflare. The company has also just released Astro 6 in beta, an update featuring a redesigned development server.

The merger with Cloudflare means Astro remains open source and MIT-licensed and continues to be actively maintained, according to the January 16 announcement. In addition, Astro will continue to support a wide set of deployment targets, not only Cloudflare, and Astro’s open governance and development roadmap remain in place. Full-time employees of The Astro Technology Company are now employees of Cloudflare, said the announcement.

A separate blog post on January 13 provided instructions for accessing the Astro 6 beta. The Astro 6 development server refactor brings Astro’s development and production code paths much closer and increases Astro’s stability on all runtimes, according to the announcement. The release also unlocks first-class support for Astro on the Cloudflare Workers platform for building applications across the Cloudflare global network.

Astro 6 also features the Content Security Policy (CSP) feature, which helps protect sites against cross-site scripting (XSS) and other code injection attacks by controlling which resources can be loaded. Previously released as an experimental feature in Astro 5.9, CSP is stable in Astro 6. Also stabilized is Astro 5.10’s experimental Live Content Collections (LCC) feature. LCC builds on Astro’s Type-safe Content Collections feature, which lets users fetch content either locally or from a CMS, API, database, or other sources, with a unified API working across all content.
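
In Astro 5.9 the CSP feature sat behind an experimental flag in the project config. A minimal sketch of that flag-based form is below; Astro 6 stabilizes the feature, so the option is expected to move out of the experimental block (check the Astro 6 docs for the final name):

// astro.config.mjs: enabling the experimental CSP support introduced in Astro 5.9
import { defineConfig } from "astro/config";

export default defineConfig({
  experimental: {
    csp: true, // stable in Astro 6, where this should no longer require the experimental namespace
  },
});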


Visual Studio Code adds agent development extension 16 Jan 2026, 12:35 pm

Microsoft is offering a Microsoft Copilot Studio extension for its Visual Studio Code editor, enabling developers to build and manage Copilot Studio agents from VS Code.

Launched January 14, the extension can be accessed from the Visual Studio Marketplace. The extension is intended to make it possible to develop AI agents in a familiar editor, with source control, and with AI help when wanted, according to Microsoft. The tool provides language support, IntelliSense code suggestions and completions, and authoring capabilities for Copilot Studio agent components. Microsoft explained that as agents grow beyond a few topics and prompts, teams need the same development “hygiene” used for apps: source control, pull requests, change history, and repeatable deployments. The VS Code extension brings this workflow to Copilot Studio so developers can collaborate without losing velocity or governance, the company said.

With this extension, developers can build and refine a Copilot Studio agent with AI help in the same place they write other code. Developers can use GitHub Copilot, Claude Code, or any VS Code AI assistant to draft new topics, update tools, and quickly fix issues in an agent definition, then sync changes back to Copilot Studio to test and iterate. Microsoft has designed the extension for the way developers work, with support for standard Git integration for versioning and collaboration, pull request-based reviews, and auditability over time, with a history of modifications. The extension also supports VS Code ergonomics, with keyboard shortcuts, search, navigation, and a local dev loop.


Google tests BigQuery feature to generate SQL queries from English 16 Jan 2026, 10:54 am

Google is previewing a new AI-driven feature in its BigQuery data warehouse that generates parts of SQL queries from natural-language comments. The company claims the capability will speed up data analysis and lower the barrier to working with complex queries as enterprises look to simplify data access in order to move their AI pilots into production.

The new feature, Comments to SQL, will enable developers and data analysts to write natural-language instructions in SQL comments and have them translated into executable queries inside BigQuery Studio.

To get started, users need to enable the SQL Generation widget inside the Studio and then write natural-language instructions directly inside SQL comments delineated by /* and */ — for example, describing the columns, dataset, and filters they want to apply, Gautam Gupta, machine learning engineering manager at Google, wrote in a blog post.

Those instructions can then be converted into SQL by clicking the Gemini gutter button and selecting the “Convert comments to SQL” option, which generates the corresponding query and displays a diff view showing how the comments were translated into executable SQL, he wrote, adding that developers can also refine instructions to get to the desired output, which shows up in the expanded view.

He provided several examples of the Comments to SQL converter at work, including this outline of a query in which the user calls for a window function and ranked data:

SELECT /* product name, monthly sales, and rank of products by sales within each category */
FROM /* sales_data */
WHERE /* year is 2023 */
WINDOW /* partition by category order by monthly sales descending */

That, he wrote, would generate the following SQL query:

SELECT
    product_name,
    SUM(monthly_sales) AS total_monthly_sales,
    RANK() OVER (PARTITION BY category ORDER BY SUM(monthly_sales) DESC) AS sales_rank
FROM
    `sales_data`
WHERE
    EXTRACT(YEAR FROM sale_date) = 2023
GROUP BY
    product_name, category, EXTRACT(MONTH FROM sale_date)

But it’s still a far cry from being able to turn something like “/* give me a list of products by category, ranked by monthly sales in 2023 */” into a working query that does what the user wants.

Minimizing friction in day-to-day tasks

Robert Kramer, principal analyst at Moor Insights and Strategy, said those working with data tend to think in terms of questions and outcomes, not syntax. “Translating intent into accurate and efficient SQL still takes time, especially with joins, time logic, and repetitive patterns. By allowing natural language expressions inside SQL comments, Google is trying to speed up that translation while keeping SQL as the execution layer,” he said.

With the new feature, teams could spend more time interpreting results and less time writing and rewriting queries, creating more automated analytics processes down the road while speeding up insights, minimizing team handoffs, and saving time on query setup, he added.

Google has continued to add AI-driven features to BigQuery to help developers and data analysts with SQL querying.

Last November, Google added three new managed AI-based SQL functions — AI.IF, AI.CLASSIFY, and AI.SCORE — to help enterprise users reduce the complexity of running large-scale analytics, especially on unstructured data.

These functions can be used to filter and join data based on semantic meaning using AI.IF in WHERE or ON clauses, categorize unstructured text or images with AI.CLASSIFY in GROUP BY clauses, and rank rows by natural language criteria through AI.SCORE in ORDER BY clauses.
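
The function names and the clauses they belong in come from Google's announcement, but the argument shapes below are illustrative assumptions rather than documented signatures, and the table name is hypothetical; treat this as a sketch and consult the BigQuery reference before using it:

-- Illustrative sketch only: table name and argument shapes are assumptions
SELECT
  review_text,
  AI.CLASSIFY(review_text, ['praise', 'complaint', 'question']) AS category
FROM
  `project.dataset.reviews`
WHERE
  AI.IF(('This review mentions a shipping delay: ', review_text))
ORDER BY
  AI.SCORE(('How urgent does this review sound? ', review_text)) DESC;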

Before that, in August, Google made incremental updates to the data engineering and data science agents in BigQuery that it had announced in April during its annual Google Cloud Next event to help automate data analytics tasks.

While the data engineering agent can help with pipeline building, data transformation and pipeline troubleshooting, the data science agent can automate end-to-end data science workflows, from creating multi-step plans through generating and executing code, reasoning about the results, and presenting findings.

Industry-wide shift

Google isn’t the only data warehouse and analytics service provider that is trying to integrate AI into SQL.

While Databricks already offers AI Functions that can be used to apply generative-AI or LLM inference directly from SQL or Python, Snowflake provides AI_PARSE_DOCUMENT, AISQL, and Cortex functions for document parsing, semantic search, and AI-driven analytics.

Other warehouses, such as Oracle’s Autonomous Data Warehouse, also support AI workflows alongside SQL.


Enterprise Spotlight: Setting the 2026 IT agenda 16 Jan 2026, 10:20 am

IT leaders are setting their operations strategies for 2026 with an eye toward agility, flexibility, and tangible business results. 

Download the January 2026 issue of the Enterprise Spotlight from the editors of CIO, Computerworld, CSO, InfoWorld, and Network World and learn about the trends and technologies that will drive the IT agenda in the year ahead.


Google Vertex AI security permissions could amplify insider threats 16 Jan 2026, 6:46 am

The finding of fresh privilege-escalation vulnerabilities in Google’s Vertex AI is a stark reminder to CISOs that managing AI service agents is a task unlike any that they have encountered before.

XM Cyber reported two different issues with Vertex AI on Thursday, in which default configurations allow low-privileged users to pivot into higher-privileged Service Agent roles. But, it said, Google told it the system is just working as intended.

“The OWASP Agentic Top 10 just codified identity and privilege abuse as ASI03 and Google immediately gave us a case study,” said Rock Lambros, CEO of security firm RockCyber. “We’ve seen this movie before. Orca found Azure Storage privilege escalation, Microsoft called it ‘by design.’ Aqua found AWS SageMaker lateral movement paths, AWS said ‘operating as expected.’ Cloud providers have turned ‘shared responsibility’ into a liability shield for their own insecure defaults. CISOs need to stop trusting that ‘managed’ means ‘secured’ and start auditing every service identity attached to their AI workloads, because the vendors clearly aren’t doing it for you.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, said the report is “a window into how the trust model behind Google’s Vertex AI is fundamentally misaligned with enterprise security principles.” In these platforms, he said, “Managed service agents are granted sweeping permissions so AI features can function out of the box. But that convenience comes at the cost of visibility and control. These service identities operate in the background, carry project-wide privileges, and can be manipulated by any user who understands how the system behaves.”

Google didn’t respond to a request for comment. 

The vulnerabilities, XM Cyber explained in its report, lie in how privileges are allocated to different roles associated with Vertex AI. “Central to this is the role of Service Agents: special service accounts created and managed by Google Cloud that allow services to access your resources and perform internal processes on your behalf. Because these invisible managed identities are required for services to function, they are often automatically granted broad project-wide permissions,” it said. “These vulnerabilities allow an attacker with minimal permissions to hijack high-privileged Service Agents, effectively turning these invisible managed identities into double agents that facilitate privilege escalation. When we disclosed the findings to Google, their rationale was that the services are currently ‘working as intended.’”

XM Cyber found that someone with control over an identity with even minimal privileges consistent with Vertex AI’s “Viewer” role, the lowest level of privilege, could in certain circumstances manipulate the system to retrieve the access token for the service agent and use its privileges in the project.

Gogia said the issue is alarming. “When a cloud provider says that a low-privileged user being able to hijack a highly privileged service identity is ‘working as intended,’ what they are really saying is that your governance model is subordinate to their architecture,” he said. “It is a structural design flaw that hands out power to components most customers don’t even realize exist.”

Don’t wait for vendors to act

Cybersecurity consultant Brian Levine, executive director of FormerGov, was also concerned. “The smart move for CISOs is to build compensating controls now because waiting for vendors to redefine ‘intended behavior’ is not a security strategy,” he said.

Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, warned, “A malicious insider could leverage these weaknesses to grant themselves more access than normally allowed.” But, he said, “There is little that can be done to mitigate the risk other than, possibly, limiting the blast radius by reducing the authentication scope and introducing robust security boundaries in between them.” However, “This could have the side effect of significantly increasing the cost, so it may not be a commercially viable option either.”

Gogia said the biggest risk is that these are holes that will likely go undetected because enterprise security tools are not programmed to look for them. 

“Most enterprises have no monitoring in place for service agent behavior. If one of these identities is abused, it won’t look like an attacker. It will look like the platform doing its job,” Gogia said. “That is what makes the risk severe. You are trusting components that you cannot observe, constrain, or isolate without fundamentally redesigning your cloud posture. Most organizations log user activity but ignore what the platform does internally. That needs to change. You need to monitor your service agents like they’re privileged employees. Build alerts around unexpected BigQuery queries, storage access, or session behavior. The attacker will look like the service agent, so that is where detection must focus.”

He added: “Organizations are trusting code to run under identities they do not understand, performing actions they do not monitor, in environments they assume are safe. That is the textbook definition of invisible risk. And it is amplified in AI environments, because AI workloads often span multiple services, cross-reference sensitive datasets, and require orchestration that touches everything from logs to APIs.”

This is not the first time Google’s Vertex AI has been found vulnerable to a privilege escalation attack: In November 2024, Palo Alto Networks issued a report finding similar issues with the Google Vertex AI environment, problems that Google told Palo Alto at the time that it had fixed.

This article first appeared on CSO.


MongoDB releases mongot source code to boost RAG and AI workloads 16 Jan 2026, 5:48 am

MongoDB has released the source code of mongot, the engine that powers MongoDB Search and Vector Search, under the Server Side Public License. Analysts say the move will help developers using the self-managed version of the database plan better RAG systems for AI use cases, as the code will provide more transparency, debuggability, and control.

By making mongot’s source code publicly available, MongoDB is turning what was previously an opaque, Atlas-only service (Atlas being the managed version of the database) into inspectable components, allowing developers to understand how text and vector queries are indexed, executed, and ranked, said Sanjeev Mohan, principal analyst at SanjMo.

The shift is expected to resonate particularly with teams building AI and retrieval-augmented generation (RAG) applications, where visibility into search behavior and failure modes is increasingly critical as systems move from pilots to production, Mohan added.

However, ISG’s executive director of software research, David Menninger, cautioned that developers shouldn’t consider mongot’s code open source merely because it is publicly available.

“Like open-source licenses, the SSPL enables developers to view, use, modify, and share the related source code. It does not meet all the criteria of the Open Source Initiative’s Open Source definition, however, as it requires that anyone incorporating SSPL-licensed code into products offered to an external party (e.g., customer, partner) as a service must release the entirety of their source code for their product under the SSPL,” Menninger said.

But that doesn’t stop developers from using it to build applications for their own consumption, said Bradley Shimmin, lead of the data and analytics practice at The Futurum Group.

Rather, the SSPL is “designed specifically” to stop MongoDB’s competitors from taking its free code and selling it as a managed service without paying for it, Shimmin said.

Lowering barriers to adoption

The development could lower barriers to adoption of MongoDB’s offerings, analysts say.

“Previously, if a developer wanted the full MongoDB search experience, they had to be on its managed cloud, Atlas. By releasing the source code, MongoDB is effectively removing the functional wall between their cloud service and their self-managed/Community version,” said Stephanie Walter, practice lead for the AI stack at HyperFRAME Research.

Developers can now test the engines in a local environment, without an internet connection, a credit card, or the need to spin up an Atlas cloud cluster, according to The Futurum Group’s Shimmin.

Analysts say that MongoDB is trying to retain developers with this move, given that the database market is heading towards consolidation, especially around AI applications and use cases.

Typically, most businesses would want to start their AI application development journey on specialized vector databases, but if a developer can test, build, and scale AI systems within MongoDB’s ecosystem, they are less likely to churn, Walter said.

In addition to releasing the source code for mongot, the database provider has also extended the automated embedding capability inside its Vector Search to the Community Edition of the database.

The capability, which automates the process of generating, storing, and updating vector embeddings, reduces complexity for developers when designing a RAG system. Traditionally, developers have needed to construct a pipeline to create and manage vector embeddings, especially for newly ingested data.
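
A rough sketch of what such a hand-rolled pipeline can look like with the Node.js driver is shown below; the embedText helper stands in for whatever embedding model a team actually calls, and the database and collection names are hypothetical:

import { MongoClient } from "mongodb";

// embedText is a placeholder for a call to an external embedding model.
async function ingest(
  doc: { title: string; body: string },
  embedText: (text: string) => Promise<number[]>
) {
  const client = new MongoClient(process.env.MONGODB_URI!);
  await client.connect();
  try {
    const docs = client.db("app").collection("documents");
    const embedding = await embedText(doc.body); // 1. generate the vector outside the database
    await docs.insertOne({ ...doc, embedding }); // 2. store it alongside the document
    // 3. the same code must re-run whenever doc.body changes; automated embeddings remove steps 1-3
  } finally {
    await client.close();
  }
}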

Analysts also view the inclusion of the automated embedding capability inside the Community edition as yet another step in MongoDB’s broader effort to challenge rival database providers, especially specialty vector databases.

“This is a direct shot at Pinecone. If the database you already use can handle the complex embedding pipeline for you, there’s really little reason to buy a separate vector-only database,” Walter said.

Like Walter, Shimmin believes that the move to add automated embeddings as a capability inside the Community Edition also hurts “glue code” vendors like LangChain.

“It also puts pressure on specialized vector database players to offer more than just storage,” Shimmin added. The automated embeddings capability and mongot are still in preview.


How Ansible does the real work in hyperautomation 16 Jan 2026, 2:00 am

Forget the sci-fi fantasy of robots running everything. The real story isn’t about flashy tech, it’s about a smarter way to work. Hyperautomation is that smarter way: A deliberate, all-hands-on-deck strategy where organizations systematically find, prioritize and overhaul their processes. It’s the art of weaving together the right digital tools, not just to replace a single task, but to reinvent entire workflows from start to finish.

To DevOps and SRE teams, automation is a daily reality. However, hyperautomation represents a significant step beyond standard scripts and scheduled tasks. It’s not a single software package you can buy; rather, it’s a business-driven, disciplined approach used to rapidly identify and automate as many IT and business processes as possible.

Hyperautomation involves the strategic orchestration of multiple technologies — such as AI, machine learning (ML), robotic process automation (RPA) and event-driven architecture — to create an integrated framework for end-to-end process optimization.

Think of it as conducting a symphony, not just playing one instrument. This integrated approach brings powerful technologies together:

  • RPA. Your reliable digital assistant. It tackles the tedious, repetitive work, entering data, moving files and filling forms, freeing your team from the monotony.
  • Infrastructure as Code (IaC). The architect behind the scenes. It ensures your infrastructure is created, configured and managed through automated, repeatable code, not manual effort. With IaC, your environments become consistent, predictable and scalable, giving your automation a rock-solid foundation to stand on.
  • AI & ML. The intuition behind the operation. These tools spot patterns, make informed predictions and handle decisions that used to require human judgment.
  • Natural language processing (NLP). The bridge to unstructured information. It allows systems to read between the lines, understanding text in emails, documents and forms just like a person would.
  • Intelligent workflow automation. The master coordinator. It seamlessly connects tasks across different departments and software, ensuring nothing falls through the cracks.
  • Process mining & intelligence. Your business’s x-ray vision. By analyzing digital footprints left in your systems, it reveals the hidden bottlenecks and inefficiencies you never knew existed, showing you exactly where to focus.

Where Ansible fits in the ecosystem

In the hyperautomation “stack,” Ansible serves as a primary execution engine for infrastructure and configuration. If AI and workflow engines represent the “brains” making decisions, Ansible represents the “hands” that perform the actual work in the real world.

While tools like ServiceNow handle IT Service Management (ITSM) and orchestration, Ansible provides the Infrastructure-as-Code (IaC) capabilities necessary to modify cloud environments, deploy clusters or patch thousands of servers across hybrid landscapes.

Differentiating hyperautomation from traditional automation

The shift from traditional methods to hyperautomation is a fundamental paradigm shift in how complexity is managed:

  • Traditional automation. Typically focuses on simple, predefined tasks using script-based rule engines. These are often isolated “islands” of automation that handle structured data but stall when faced with complex, cross-departmental workflows.
  • Hyperautomation. Adopts a holistic, end-to-end approach. It integrates various processes across departments, handles both structured and unstructured data, and creates systems that can learn from data and adapt to changes over time.

Real-world synergy: Automated incident remediation

A practical example of this synergy is the integration between an ITSM platform (like ServiceNow) and Ansible to handle cloud infrastructure failures.

Imagine a scenario where AWS CloudWatch detects a spiking CPU on a mission-critical EC2 instance. In a traditional setup, this might trigger an email to an SRE who manually investigates. In a hyperautomated workflow:

Graphic: Hyperautomation workflow (Raul Leite)
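
The graphic is not reproduced here, but the remediation leg of such a workflow usually comes down to a short playbook that the workflow engine invokes. A minimal sketch, with hypothetical host pattern and service name:

- name: Remediate high CPU on a flagged EC2 instance
  hosts: "{{ target_instance | default('tag_role_web') }}" # hypothetical pattern passed in by the workflow engine
  become: true
  tasks:
    - name: Capture the top CPU consumers for the incident record
      ansible.builtin.command: ps -eo pid,comm,%cpu --sort=-%cpu
      register: top_cpu
      changed_when: false

    - name: Restart the suspect application service
      ansible.builtin.service:
        name: example-app # hypothetical service name
        state: restarted

    - name: Surface diagnostics back to the incident (ITSM integration step omitted)
      ansible.builtin.debug:
        msg: "{{ top_cpu.stdout_lines }}"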

Core benefits for engineering teams

Implementing Ansible within a hyperautomation framework delivers measurable operational gains:

  • Scalability. Orchestrates consistent execution across multiple instances and global operations that would otherwise be unmanageable.
  • Consistency. Enforces policies uniformly across AWS, Azure and on-premise environments, eliminating “configuration drift”.
  • Reduced human error. Minimizes the variability of manual data handling, which is a primary cause of operational disruptions.
  • Faster delivery. Organizations can achieve a 30-50% reduction in implementation time compared to sequential manual methods.

Addressing the misconception: Ansible ≠ Hyperautomation

A common pitfall for engineering teams is assuming that “Ansible alone equals hyperautomation.” Because hyperautomation is a modular strategy, no single tool can manage the entire lifecycle.

Ansible is exceptional at execution, but a complete strategy requires integration with:

  • Workflow engines/ITSM platforms. To coordinate complex sequences of interdependent tasks across teams (e.g., ServiceNow, Ivanti).
  • CI/CD pipelines. To ensure that automation is part of the continuous delivery lifecycle.
  • Process intelligence tools. To identify bottlenecks and activities suitable for automation before deployment.

Conclusion

Lastly, Ansible is a critical enabler of hyperautomation because it provides the reliable, audit-friendly execution layer needed to transform strategy into tangible action. However, it is only one instrument in the orchestra. To achieve true hyperautomation, you must surround Ansible’s execution power with the “intelligence” of AI/ML and the “orchestration” of high-level workflow platforms to create a harmonious business symphony.

If hyperautomation is like a self-driving car, the AI is the navigation system and sensors, the workflow engine is the central computer coordinating everything, and Ansible is the engine and transmission that actually moves the vehicle forward.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Caught in the great SaaS squeeze 16 Jan 2026, 1:00 am

In the ever-shifting landscape of enterprise technology, it has become increasingly clear during the past few years that on-premises software deployments are heading for the door—and not necessarily by customer choice. As I’ve observed time and again, the balance of power has shifted from buyers to vendors, and nowhere is this more evident than in the changes sweeping through the software-as-a-service (SaaS) arena. The latest move from Epicor, long known for its strong ties to clients in the manufacturing and distribution sector, should leave no one in doubt: The era of on-premises enterprise resource planning (ERP) is drawing to a close, and vendors are now in the driver’s seat.

Epicor’s recent announcement sets a defined sunset period for its on-prem releases of Kinetic, Prophet 21, and BisTrack. After years of signaling a cloud-first future, Epicor has made it official: Customers relying on legacy versions will need to move to Epicor Cloud (its SaaS platform hosted on Microsoft Azure) if they want access to innovation, new features, or even long-term support. There are some phased timelines for continued support, but the direction is unmistakable.

This isn’t a move driven by customer demand. It represents the culmination of long-standing vendor priorities: SaaS is less expensive to support, is easier to secure and update centrally, and is a much simpler model when it comes to rolling out AI-powered features and analytics. As an architect who has spent decades advising large organizations through technology shifts, I certainly understand the allure for vendors. Managing a single, cloud-based code base instead of multiple on-prem versions can cut costs dramatically and accelerate upgrades and fixes. For Epicor and nearly every other software company I track, centralization is now the name of the game.

Vendor benefits aren’t buyer benefits

I first heard about Epicor’s decision when one of my long-time clients, a company for whom ERP reliability is mission-critical, reached out with deep concerns. Like so many others, they’re being pushed into the cloud not by positive business drivers, but by the withdrawal of the on-premises option. Their worries are far from theoretical. Just last year, major outages reminded us that the cloud, for all its strengths, is no panacea for risk. Add legitimate worries about latency, compliance, and new security models, and it’s clear that this transition creates anxiety right alongside opportunity.

Let’s be clear about what’s motivating this trend. For Epicor and its peers, moving to SaaS means they can focus their resources, lower support costs, accelerate innovation, and simplify patching, security, and integrations. With Epicor Cloud, for example, every customer runs the same core code, patches are pushed universally, and operating expenses fall as a result. It’s a sound business strategy for vendors to gain recurring revenue, less version sprawl, and a more streamlined engineering organization.

That efficiency often comes at the expense of customer choice. Enterprises are asked to cede infrastructure control, accept new dependencies, and trust that the vendor-managed environment will meet all their requirements for security, latency, uptime, and regulatory compliance—sometimes with only limited visibility or contractual recourse. For organizations that selected on-prem software precisely because of their unique needs, this is a seismic change that can’t be solved by simply “lifting and shifting” their applications.

What to do if a vendor goes SaaS-only

Transitioning to a vendor-managed SaaS solution isn’t simply a technical migration; it’s a shift in risk, responsibility, and, often, vendor relationship dynamics. Here are five things every enterprise should do if a software provider eliminates on-prem options:

First, scrutinize compliance and regulatory issues in detail. Moving to a SaaS-based ERP managed by your vendor (running it on infrastructure you didn’t choose—like Microsoft Azure) means that your compliance posture changes overnight. You need to ask tough questions:

  • Does the SaaS provider offer transparency on data residency?
  • Are all compliance obligations addressed in the contract?
  • What are the penalties or remedies if there are problems?
  • If you’re in a tightly regulated industry, does the SaaS environment support regional or sovereign cloud options?
  • Does your current compliance status automatically carry over? Don’t assume it will. Demand specifics and supporting evidence.

Second, rigorously test for performance and latency impacts before going live. SaaS centralizes services, often far from your physical location or core operational regions. Conduct benchmarks using real-world workloads and user locations, not just vendor-provided averages. If you have shop-floor automation or latency-sensitive workflows, even small delays can ripple through production. Ask for data on service location, network path, and real-world latency. Your vendor should be able to provide performance service-level agreements. If not, press them on the details.

Third, evaluate the new support model. Vendor-led SaaS support can be both a blessing and a curse. Although you might get improved ticket response and universal updates, you lose the ability to self-manage outages or roll back changes. You’ll need to ask about:

  • How quickly will the vendor respond to your issues?
  • What escalation paths exist if downtime occurs?
  • What are the remedies if vendor upgrades break custom integrations?
  • How will the vendor communicate with you regarding incident response times and levels of support, particularly for business-critical processes?

Fourth, assess the deeper risks of putting your core data and business processes on an external cloud managed by a third party. Be proactive in contract negotiations and seek guaranteed data access in worst-case scenarios. Also, find out:

  • How does the vendor handle data isolation?
  • What are your options for backup and disaster recovery outside their walled garden?
  • In the event of a prolonged cloud provider outage, such as we saw last year, what operational continuity measures are in place?
  • Is the vendor offering clear recovery time objectives (RTOs) and recovery point objectives (RPOs)?

Finally, always consider alternatives and plan an exit strategy. If the vendor’s SaaS model simply cannot meet key requirements—whether for compliance, latency, or control—it’s time to decide if the relationship should continue. Market offerings such as hybrid or specialized SaaS providers, managed private clouds, or open source ERP solutions may offer a better fit for niche cases. If you proceed with the SaaS transition, make sure you can get your data out in standard formats, with contractual guarantees to support migration should you need to change direction in the future.

New skills and new vigilance

Epicor (and many other software vendors) is betting that enough customers will accept the trade-offs for the benefits of simplicity, speed, and SaaS-enabled innovation, especially as the costs and overhead of on-prem maintenance continue to rise. But for those who justifiably worry about the loss of control and new risk models, this transition will demand rigorous contract negotiation, tighter governance, and a willingness to explore new architectures or even new providers.

Enterprises must approach this new reality with clear eyes and strong questions. SaaS can be a win for both sides, but only if risk is managed, compliance is guaranteed, and business-critical processes won’t be left to the mercy of a single vendor or cloud provider. In this new era, success will not go to the bold or the cautious, but to the best prepared.


PHP language still relevant, advocate insists 15 Jan 2026, 9:32 pm

A longtime leading programming language for web development dating back to 1995, PHP has lost the spotlight to languages such as Python and JavaScript in recent years. Nonetheless, a PHP specialist at PHP software vendor Perforce Zend is stressing the continued importance of the language.

“Is PHP still relevant in 2026? Short answer: Yes, and it shows no signs of going anywhere,” said Matthew Weier O’Phinney, principal product manager at Perforce Zend and OpenLogic, in a January 15 blog post. O’Phinney developed web applications on the PHP-based Zend Framework even before its public release and led the Zend open-source project from 2009 to 2019. PHP, he said, has been the silent workhorse of the modern web for more than three decades. “In fact, many users are interacting with PHP every day without realizing it,” he said, citing PHP usage in the Drupal and WordPress content management systems. Frameworks such as Laravel and Symfony are also built on PHP, he added. “From personal blogs to complex enterprise systems, PHP’s usage remains widespread, even as newer technologies emerge and grow,” O’Phinney said.

“While it is true that PHP usage has declined slightly in recent years, it remains the most popular choice for server-side languages by a wide margin,” O’Phinney stressed. And with advancements in PHP 8.x, performance is rarely a bottleneck for PHP web applications, he said. The JIT (just in time) compiler and improvements to the Zend Engine ensure that PHP handles high-concurrency requests efficiently, he added. This past November saw the release of PHP 8.5, featuring an extension for securely parsing URIs and URLs. PHP also has proven itself highly adaptable to cloud-native and containerized deployment, O’Phinney added. “The language easily integrates with containerization tools like Docker, empowering teams to build lightweight, isolated PHP environments that are consistent across development, testing, and production stages.”

O’Phinney also examined PHP matchups with other languages including Python and Java. “If your web application relies heavily on real-time data processing, predictive analytics, or ML models, Python is likely the better choice,” he said. But Python frameworks such as Django and Flask do not inherently outperform PHP in standard web-serving tasks, said O’Phinney. Java, meanwhile, remains a popular PHP alternative for massive, complex enterprise-grade systems, he said. “However, Java development is typically slower and more resource-intensive than PHP development,” he said.

In this month’s Tiobe and Pypl indexes of programming language popularity, PHP ranked 15th and seventh, respectively. Python ranked first and Java third in both indexes.


Possible software supply chain attack through AWS CodeBuild service blunted 15 Jan 2026, 2:19 pm

An AWS misconfiguration in its code building service could have led to the compromise of a massive number of key AWS GitHub code repositories and applications, say researchers at Wiz, who discovered the problem.

The vulnerability stemmed from a subtle flaw in how the repositories’ AWS CodeBuild CI (continuous integration) pipelines handled build triggers. “Just two missing characters in a regex filter allowed unauthenticated attackers to infiltrate the build environment and leak privileged credentials,” the researchers said in a Thursday blog.  

The regex (regular expression) filter at the center of the issue is an automated pattern-matching rule that scans log output for secrets and hides them to prevent leakage.

The issue allowed a complete takeover of key AWS GitHub repositories, particularly the AWS JavaScript SDK, a core library that powers the AWS Console.

“This shows the power and risk of supply chain vulnerabilities,” Yuval Avrahami, co-author of the report about the bug, told CSO, “which is exactly why supply chain attacks are on the rise: one small flaw can lead to an insanely impactful attack.”

After being warned of the vulnerability last August, AWS quickly plugged the hole and implemented global hardening within the CodeBuild service to prevent the possibility of similar attacks. Details of the problem are only being revealed now by Wiz and AWS.

AWS told CSO that it “found that there was no impact on the confidentiality or integrity of any customer environment or AWS service.” It also advised developers to follow best practices in using AWS CodeBuild.

But the Wiz researchers warned developers using the product to take steps to protect their projects from similar issues.

Discovery

Wiz discovered the problem last August after an attempted supply chain attack on the Amazon Q VS Code extension. An attacker exploited a misconfigured CodeBuild project to compromise the extension’s GitHub repository and inject malicious code into the main branch. This code was then included in a release which users downloaded. Although the attacker’s payload ultimately failed due to a typo, it did execute on end users’ machines – clearly demonstrating the risk of misconfigured CodeBuild pipelines. 

Wiz researchers investigated and found the core of the flaw, a threat actor ID bypass due to unanchored regexes, and notified AWS. Within 48 hours, that hole was plugged, AWS said in a statement accompanying the Wiz blog.
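
Wiz has not published the exact filter string, but the class of bug is easy to demonstrate. A minimal sketch with made-up account IDs, showing how the two missing characters (the ^ and $ anchors) change the outcome:

import re

ALLOWED_ID = "123456789"  # hypothetical trusted actor account ID

unanchored = re.compile(ALLOWED_ID)            # the flawed filter: matches anywhere in the string
anchored = re.compile("^" + ALLOWED_ID + "$")  # the fix: must match the whole ID

attacker_id = "9123456789"  # a different account whose ID merely contains the allowed one

print(bool(unanchored.search(attacker_id)))  # True: the attacker slips through the filter
print(bool(anchored.search(attacker_id)))    # False: the anchors reject it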

It also performed additional hardening, including adding further protections to all build processes that contain Github tokens or any other credentials in memory. AWS said it also audited all other public build environments to ensure that no such issues exist across the AWS open source estate.

In addition, it examined the logs of all public build repositories, as well as associated CloudTrail logs, “and determined that no other actor had taken advantage of the unanchored regex issue demonstrated by the Wiz research team. AWS determined there was no impact of the identified issue on the confidentiality or integrity of any customer environment or any AWS service.” 

Kellman Meghu, chief technology officer at Deepcove Cybersecurity, a Canada-based risk management firm, said it wouldn’t be a huge issue for developers who don’t publicly expose CodeBuild. “But,” he added, “if people are not diligent, I see how it could be used. It’s slick.”

Developers shouldn’t expose build environments

CSOs should ensure developers don’t expose build environments, Meghu said. “Using public hosted services like GitHub is not appropriate for enterprise code management and deployment,” he added. “Having a private GitLab/GitHub, service, or even your own git repository server, should be the default for business, making this attack impossible if [the threat actors] can’t see the repository to begin with. The business should be the one that owns the repository; [it should] not be something you just let your developers set up as needed.” In fact, he said, IT or infosec leaders should set up the code repositories. Developers “should be users of the system, not the ultimate owners.” 

Wiz strongly recommends that all AWS CodeBuild users implement the following safeguards to protect their own projects against possible compromise:

  • Secure the CodeBuild-GitHub connection by:
    • generating a unique, fine-grained Personal Access Token (PAT) for each CodeBuild project;
    • considering using a dedicated unprivileged GitHub account for the CodeBuild integration.


Gleam update shines on external types 15 Jan 2026, 12:48 pm

Gleam 1.14.0, a new version of the statically typed language for the Erlang VM and JavaScript runtimes, has enhanced support for external types.

Released December 25, the update can be accessed at GitHub. With this release, the @external annotation is now supported for external types, giving the programmer a way to specify an Erlang or TypeScript type definition to be used, according to Gleam creator Louis Pilfold. Gleam’s external type feature is used to declare an Erlang or JavaScript type that can be referenced in Gleam, but because the type originates from outside of Gleam, the Gleam compiler cannot produce a precise definition in generated Erlang or TypeScript type definitions. Instead, the compiler had to fall back to the correct but vague “any” type of each language.
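
For readers unfamiliar with the feature, an external type is declared in Gleam without constructors and implemented by the target runtime. A minimal pre-1.14 sketch (the module and function names are hypothetical) shows the pattern whose generated type definitions previously fell back to the vague “any” type; the new annotation lets the programmer name the real Erlang or TypeScript type instead:

// A type implemented outside Gleam but referenced from Gleam code
pub type Connection

// External functions already point at their Erlang implementation like this
@external(erlang, "my_db", "connect")
pub fn connect(url: String) -> Connection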

Also enhanced in Gleam 1.14.0 is inference-based pruning, an optimization that improves performance and detects more redundant patterns when pattern matching on binary data. Gleam 1.14.0 extends this optimization to work with int segments, thus increasing its effectiveness.

Gleam 1.14.0 also offers number normalization in pattern matching analysis, resulting in faster code. Numbers can be written in different formats in Gleam (decimal, octal, hexadecimal, and so on; floats can also be written in scientific notation). The compiler now internally normalizes these values to a single canonical representation. This representation is now used by the pattern matching analysis engine, further enabling optimizations such as inference-based pruning.

Other improvements in Gleam 1.14.0:

  • Equality testing has been made faster. Performance of == and != has been improved for field-less custom type variants when compiling to JavaScript.
  • The record update syntax now can be used in constant definitions, enabling constant records to be constructed from other constant records.
  • The release updates to the latest Elixir compiler API, fixing some warnings that would be emitted with previous versions of Gleam and the latest version of Elixir.


When your platform team can’t say yes: How away-teaming unlocks stuck roadmaps 15 Jan 2026, 2:00 am

Product teams regularly approach platform organizations with requests that make complete business sense. A team launching in new markets needs regional payment processor integration. Another team piloting a new discounting strategy needs a new incentive construct. A third team building an enterprise offering needs custom invoicing capability. These requests are well-scoped and clearly valuable. Yet platform teams must frequently decline them, not because the request lacks merit, but because the platform roadmap is already full of other higher-priority features and capabilities.

Product teams, meanwhile, operate under different constraints. Revenue targets do not adjust for platform capacity. Market launch dates were set months ago. Pricing experiments that could move key metrics cannot wait two quarters for platform prioritization.

This familiar impasse stalls innovation, forces product teams into costly duplication and pits business priorities against engineering reality. While various collaboration models exist (the interaction modes described in Team Topologies, for example, or embedding platform experts with product teams), these assume either that the platform team has capacity to prioritize the work or that product teams should proceed independently.

Away teaming offers a different, resource-neutral mechanism: product teams temporarily assign engineers to build what they need as reusable platform capabilities, under platform guidance. This is the viable way out of the zero-sum game.

Why conventional approaches fail

Platform teams face a resource equation that never balances. Demand consistently exceeds capacity by substantial margins. As a platform engineering leader I receive 3 to 5 times more requests than my team can fulfill in any given quarter.

The standard responses all produce poor outcomes:

  • Product teams wait. They miss launch windows. Competitors ship first. The carefully planned go-to-market strategy becomes irrelevant because the timing is wrong.
  • Product teams build independently. Within months, the organization accumulates duplicated effort: multiple, incompatible implementations of pricing logic, custom invoicing systems that generate reports in incompatible formats and a spiderweb of technical debt that platform teams will eventually have to reconcile at 3 times the original cost.
  • Platform teams accommodate everything. They deliver neither their critical roadmap work nor the product requests well. Engineers burn out. Technical debt compounds. The platform becomes increasingly fragile under the weight of rushed implementations.

The zero-sum framing (platform priorities versus product needs) ensures someone always loses.

How away-teaming restructures the problem

Away teaming inverts the traditional model. Instead of platform engineers embedding with product teams to provide expertise, product engineers temporarily join platform teams to build required capabilities under platform guidance.

This is fundamentally different from the established patterns described in Team Topologies or standard platform collaboration models. While the ‘X-as-a-Service’ mode assumes the platform team has capacity to fund and build the service, and the ‘Facilitating’ mode assumes capacity to coach, away teaming addresses the specific scenario where the platform team’s capacity for new feature work is effectively zero but it still has capacity for governance.

Consider an enterprise product team that needs custom invoicing capabilities. The platform team could not prioritize this work. Instead, two engineers from the product team joined the platform team for 8 weeks. Working under platform guidance, they built a general-purpose invoicing service. The product team got their enterprise invoices on schedule. The platform gained a reusable invoicing capability. Four months later, when another product team needed invoicing capability, they could use the same invoicing service. What would have been separate implementations became a single platform capability serving multiple products.

The new resource equation

The product team assigns engineers for a defined period, typically 6 to 8 weeks. These engineers work in the platform codebase, attend platform standups, follow platform coding standards and receive guidance from platform engineers.

Product teams have already secured funding for their initiatives. Away teaming redirects that investment from building a product-specific solution into creating a reusable platform capability.

For platform teams, this expands effective capacity without headcount growth. Platform engineers provide design review, answer questions and conduct code review. They use their expertise without spending execution capacity on implementation work. The platform gains capabilities it could not have funded while maintaining quality standards and consistency.

Establishing the foundation: Essential prerequisites for success

Away teaming is not simply a collaboration technique. It requires specific organizational conditions to function effectively.

1. Executive alignment is critical

This cannot succeed as a platform team initiative alone. Product and platform leadership must jointly commit to the model.

When product teams miss OKRs because engineers were away teaming, product VPs need to view that as an acceptable tradeoff, not a failure. If the VP of product does not openly champion this model, product managers will be incentivized to hoard their engineers, seeing away teaming as a resource drain rather than a strategic investment. The model will die quietly through non-participation.

To successfully pitch this to product leadership, frame the ‘resource loss’ not as a temporary cost, but as a strategic investment in technical risk mitigation. By temporarily funding a reusable platform capability, the product VP is eliminating the 3x reconciliation cost and high-risk technical debt associated with their team building it independently. This protects future velocity and product stability across the organization.

2. The career development framing matters enormously

Product engineers need to view away teaming as a growth opportunity, not a sacrifice.

Frame it explicitly as platform engineering experience that builds broader systems thinking skills and deepens architectural understanding. In organizations that excel at this, an away team assignment is seen as an important point for a Senior Engineer promotion, signaling that the organization values cross-cutting, reusable systems over purely product-specific velocity.

3. Clear governance prevents drift

Someone must decide which requests become away team engagements. Establish a joint platform-product review process that evaluates requests against specific criteria:

  • Does the capability serve multiple future products?
  • Can the platform team provide meaningful guidance?
  • Is the product team willing to accept reduced velocity during the engagement period?

These prerequisites are not optional. Get executive buy-in first. Everything else depends on it.

Execution mechanics: A guide to running away team engagements

Start with a scoping conversation involving both team leads and the engineers who will do the work. Three elements need explicit documentation:

  • The business value the product team must deliver.
  • The platform standards the work must satisfy.
  • The support the platform team will provide.

If any of these three elements cannot be articulated clearly, the opportunity is not suitable for away teaming.

Temporary team membership

Away team members join the platform team operationally. They attend standups, work in the platform codebase and receive code review with the same rigor applied to any platform work. This is not a contractor relationship; it is temporary team membership.

However, their connection to their home team remains protected. Regular synchronization with product leadership validates that the work addresses actual requirements.

Guidance, not project management

The platform team provides guidance rather than project management. Platform engineers answer questions about service boundaries and system design, conduct design review and pair on complex challenges. They do not track tasks or manage sprint planning.

Guidance takes real time. Code review for unfamiliar engineers takes longer. A platform team supporting two concurrent away team engagements should expect to spend roughly 10–15 hours per week of senior engineer time on guidance. This is the investment that makes the model work.

Knowledge transfer is non-negotiable

Before an away-teaming engagement concludes, at least one platform engineer must become the ongoing owner of that code. This requires:

  • Documentation that explains the why (alternatives considered, tradeoffs made).
  • Sufficient test coverage.
  • Operational runbooks for production issues.

Away team members typically present their work to the broader platform organization. This ensures the platform team actually understands what they will be maintaining.

Impact & accountability: Measuring success and learning from failure

Away teaming requires measurement to remain credible.

  • Track capability reuse. Aim for capabilities that are adopted by two distinct products (beyond the originating team) within six months of creation. If these capabilities serve only their original product team, the generalization effort fails.
  • Monitor product team velocity impact. A team losing two engineers for eight weeks should expect 15-20% reduced output. If the impact significantly exceeds this, the away team members are either more critical than anticipated or the product team is understaffed.
  • Track engagement outcomes. What percentage of away team engagements deliver working capabilities that meet platform standards? If this falls below 80%, examine common failure modes like insufficient technical depth or inadequate platform support.

When away teaming fails, acknowledge it quickly. Not every engagement will succeed. Failed engagements provide valuable learning about what works and what does not.

When away-teaming does not apply

Some capabilities are too foundational to delegate. Core payment processing that touches every transaction, the base pricing model or revenue recognition logic that must satisfy regulatory requirements all require direct platform ownership even if other work must be delayed.

Away teaming works best for capabilities in the middle ground: too product-specific for immediate platform prioritization, yet general enough that future products will benefit from reuse.

Away teaming also has scale limits. A platform team might effectively support two concurrent away team engagements. Beyond that, guidance capacity becomes strained.

The compounding benefits

The direct value is apparent: platform capabilities that could not otherwise be funded get built. But the secondary effects often prove more valuable:

  • Platform advocacy: Product engineers who complete away team assignments become platform advocates. They understand the architectural tradeoffs and can credibly explain platform limitations, reducing tension and frustration between teams.
  • Distributed capability: These engineers help their product teams use platform capabilities effectively, spot opportunities for future platform work and design features that integrate cleanly with platform services.
  • Compounding capability: Each capability built through away teaming (e.g., custom invoicing, promotional discount engines) becomes available for future products, multiplying the platform’s overall utility.

Platform teams maintain focus on foundational work. Total platform capability expands substantially beyond what direct funding could achieve.

Getting started

Organizations implementing away teaming should begin with one pilot. Choose a product team with a clear platform need and a collaborative orientation. Document how it works, what both teams commit to and how success will be measured. Get explicit executive sponsorship from both platform and product leadership.

The conversation with product teams transforms immediately.

Rather than “we cannot prioritize this,” platform teams propose: “We cannot fund this capability, but you can. Is this important enough to invest engineering bandwidth for a defined period?”

This reframes platform dependency from a blocker into a tractable investment decision that product teams evaluate against their priorities.

Here is the reality that most platform organizations struggle to accept: you will never have enough people to build everything that should be built. Away teaming is not a compromise; it is the funding model that turns a resource constraint into a catalyst for decentralized, reusable capability growth. It is how platform organizations achieve scale while maintaining quality and consistency.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Getting started with GitHub Copilot in Visual Studio or VS Code 15 Jan 2026, 1:00 am

The way software is developed has undergone multiple sea changes over the past few decades. From assembly language to cloud-native development, from monolithic architecture to microservices, from manual testing to CI/CD automation, we’ve seen the emergence of numerous software architectures, technologies, and tools to meet the ever-changing demands of enterprises and their developers.

Most recently, AI-powered tools have impacted software development dramatically. One such tool is GitHub Copilot, a powerful and accessible AI pair programmer that integrates seamlessly into Visual Studio and Visual Studio Code.

In this article, we’ll cover what GitHub Copilot is, why it matters, and how you can use it to generate and optimize code inside Visual Studio. We’ll also examine how GitHub Copilot can fix issues in your code and even help you test it.

What is GitHub Copilot and why do we need it?

Modern software development thrives on speed, accuracy, and innovation. Developers often spend a lot of time writing boilerplate code, integrating APIs, or debugging issues in the source code. Now with the emergence of AI and AI-powered tools and technologies, you can automate all of these time-consuming tasks to boost developer productivity.

GitHub Copilot is an AI-powered coding assistant that can generate code, optimize code, document code, fix issues, create tests, and draft pull requests, freeing developers to focus on creative, complex problem-solving tasks. GitHub Copilot, which supports models from OpenAI, Anthropic, Google, and others, is much more than a code autocompletion tool. It uses advanced AI models to understand natural-language comments and the context around your code, generate code snippets, automate repetitive tasks, reduce errors, and speed up your software development workflow.

While traditional autocompletion tools suggest code based on syntax, GitHub Copilot understands the purpose of your code, i.e., what the code is intended to accomplish, and generates entire code blocks or code snippets as needed. As a result, developers can be more productive and consistent, and write better code by adhering to the best practices and identifying and fixing issues and bugs early.

How does GitHub Copilot help software developers?

Here are a few ways GitHub Copilot can help you as a developer:

  • Boost productivity: GitHub Copilot can write boilerplate, repetitive, or verbose code in seconds. Hence, you can stay focused on building the architecture, writing business logic, and writing data access code rather than spending your time on mundane tasks.
  • Reduce cognitive load: GitHub Copilot can automate tasks, help write complex business logic, and reduce the need for context switching, reducing cognitive load and helping to prevent burnout.
  • Write efficient code: Using natural-language prompts, developers can generate readable, structured, modular, and consistent code. GitHub Copilot can also help with refactoring, bug fixing, and enforcing best practices.
  • Facilitate faster learning: Beginners and experienced developers alike can learn new libraries, APIs, or frameworks faster from the live examples GitHub Copilot generates.
  • Enhance testing and validation: GitHub Copilot can help you generate tests, explore edge cases, provide remedies, and fix issues in your code.

Install GitHub Copilot in Visual Studio

To install GitHub Copilot using the Visual Studio Installer, follow the steps outlined below.

  1. Launch the Visual Studio Installer.
  2. Choose the installation of Visual Studio you want to use.
  3. Click Modify to launch the next screen to modify workloads as shown in Figure 1.
  4. Select the workload you want to modify.
  5. Select GitHub Copilot and click Modify to start the installation.
Visual Studio Installer - GitHub Copilot

Figure 1

Foundry

This will install GitHub Copilot and integrate it inside your Visual Studio IDE. You can also install GitHub Copilot in Visual Studio Code, but we’ll use Visual Studio here. Whether you’re using Visual Studio or Visual Studio Code, getting started with GitHub Copilot is quick and easy.

Generate code using GitHub Copilot

You can use GitHub Copilot to generate new code by giving it instructions in natural language. Just right-click in the code editor inside Visual Studio, click on “Ask Copilot”, and type an instruction. For example, you could ask Copilot to generate code to display all prime numbers between 1 and 100, as shown in Figure 2.

GitHub Copilot generate code

Figure 2

Foundry

Fix bugs in your code using GitHub Copilot

GitHub Copilot can also help fix bugs in your code. Let us understand this with a simple example. Refer to the following code.

string str = null;
for (int i = 65; i <= 90; i++)
    str += (char)i;
Console.WriteLine(str);

The idea behind the code above is to build up and display all alphabetical characters from A to Z. However, there is an issue in the first statement. When you assign null to a string variable and then attempt to append characters to it, the compiler flags the assignment with a nullability diagnostic: “Cannot convert null literal to non-nullable reference or unconstrained type parameter.” This is because C# 8 introduced nullable reference types, and projects that enable the feature (the default in recent .NET templates) treat reference types as non-nullable unless they are explicitly declared nullable.

Figure 3 shows how GitHub Copilot fixes the code for you.

GitHub Copilot bug fix

Figure 3

Foundry

Optimize your code using GitHub Copilot

Consider the following piece of code that shows two classes, Customer and DataManager. While Customer is just another POCO class (plain old CLR object), DataManager creates an instance of the Customer class using a method called Create that accepts the customer details (i.e., values for all properties of the class) as parameters.

class Customer
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Address { get; set; }
    public string Email { get; set; }
}
class DataManager
{
    public Customer Create(Guid Id, string firstName, string lastName, string address, string email)
    {
        Customer customer = new Customer
        {
            // Note that the incoming Id parameter is never used; a new Guid is always generated.
            Id = Guid.NewGuid(),
            FirstName = firstName,
            LastName = lastName,
            Address = address,
            Email = email
        };
        return customer;
    }
}

To optimize your code using GitHub Copilot, you can select the block of source code in the code editor, right-click and then click on “Ask Copilot”, and type “Optimize” in the input box, as shown in Figure 4.

GitHub Copilot optimize code

Figure 4

Foundry

When you click on the arrow button beside the input box to submit the “Optimize” prompt to Copilot, it will generate the optimized code as shown in Figure 5.

GitHub Copilot optimize code results

Figure 5

Foundry

Create unit tests using GitHub Copilot

Testing is key to reliable software, and GitHub Copilot makes software testing faster and easier. You can use GitHub Copilot to generate boilerplate test code for your functions or classes based on the logic it detects. Moreover, Copilot can provide you with suggestions for unit test cases across different scenarios, including edge cases. Copilot can also help create mocks or stubs for dependencies so that you can get to testing faster.

To generate unit tests with GitHub Copilot, right-click in the code editor inside Visual Studio and click “Ask Copilot”. In the input box, simply enter an instruction such as “Write unit tests for all methods of the DataManager class” as shown in Figure 6 below.

GitHub Copilot unit tests

Figure 6

Foundry

When you click on the arrow button in the input box, Copilot will create a new file in your project named DataManagerTests.cs and generate unit tests for the DataManager class as shown in Figure 7.

GitHub Copilot unit tests results

Figure 7

Foundry

Thus you can see how GitHub Copilot reduces the time you spend writing boilerplate test code and makes you more productive with test cases.

Key takeaways

By accelerating the coding process, fixing bugs, and writing tests, GitHub Copilot lets you focus on what really matters: building great software. Whether you’re building enterprise-scale distributed systems or rapid-prototyping new ideas, you should try GitHub Copilot and see how it can improve your software development workflow.

I’ll explore more GitHub Copilot capabilities, such as how Copilot can generate integration tests and improve test coverage, in future posts here.

(image/jpeg; 5.11 MB)

For agentic AI, other disciplines need their own Git 15 Jan 2026, 1:00 am

Software engineering didn’t adopt AI agents faster because engineers are more adventurous, or the use case was better. They adopted them more quickly because they already had Git.

Long before AI arrived, software development had normalized version control, branching, structured approvals, reproducibility, and diff-based accountability. These weren’t conveniences. They were the infrastructure that made collaboration possible. When AI agents appeared, they fit naturally into a discipline that already knew how to absorb change without losing control.

Other disciplines now want similar leverage from AI agents. But they are discovering an uncomfortable truth: without a Git-equivalent backbone, AI doesn’t compound. It destabilizes.

What these disciplines need is not a literal code repository, but a shared operational substrate: a canonical artifact, fine-grained versioning, structured workflows, and an agreed-upon way to propose, review, approve, and audit changes.

Consider a simple example. Imagine a product marketing team using an AI agent to maintain competitive intelligence. The agent gathers information, synthesizes insights, and updates a master brief used by sales and leadership. This seems straightforward—until the agent edits the document.

In software, Git handles this effortlessly. Every change has a branch. Every branch produces a diff. Every diff is reviewed. Every merge is recorded. Every version is reproducible. Agents can propose changes safely because the workflow itself enforces isolation and accountability.

Life without version control

For the marketing team, no such backbone exists. If the agent overwrites a paragraph, where is the diff? If it introduces a factual error, where is the audit trail? If leadership wants to revert to last week’s version, what does that even mean? The lack of structure turns AI agents into risks.

This is why Git matters. Not because it is clever, but because it enforces process discipline: explicit change control, durable history, isolated work, and reproducibility. It created a shared contract for collaboration that made modern software engineering possible, and made agentic workflows in software engineering viable.

Other disciplines need structures that mirror these properties.

Take architecture or urban planning. Teams want AI agents to update simulations, explore zoning scenarios, or annotate design models. But without a versioning protocol for spatial artifacts, changes become opaque. An agent that modifies a zoning scenario without a traceable change set is effectively unreviewable.

Or consider finance. Analysts want agents to maintain models, update assumptions, and draft memos. Yet many organizations lack a unified way to version models, track dependencies, and require approvals. Without that substrate, automation introduces new failure modes instead of leverage.

At this point, the Git analogy feels strong—but it has limits.

Software is unusually forgiving of mistakes. A bad commit can be reverted. A merge can be blocked. Even a production outage usually leaves behind logs and artifacts. Version management works in part because the world it governs is reversible.

Many other disciplines are not.

Pulling irreversible levers

Consider HR. Imagine an organization asking an AI agent to terminate a vendor contract with “Joe’s Plumbing.” The agent misinterprets the context and instead terminates the employment of a human employee named Joe Plummer. There is no pull request. No staging environment. No clean revert. Payroll is cut, access is revoked, and legal exposure begins immediately. Even if the error is caught minutes later, the damage is already real.

This is the critical distinction. In non-code domains, actions often escape the system boundary. They trigger emails, revoke credentials, initiate payments, or change legal status. Version history can explain what happened, but it cannot undo it.

This means a Git-style model is necessary, but insufficient.

Applying diffs, approvals, and history without respecting execution boundaries creates a false sense of safety. In these domains, agents must be constrained not just by review workflows, but by strict separation between proposal and execution. Agents should prepare actions, simulate outcomes, and surface intent—without directly pulling irreversible levers.
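One way to picture that separation is as a contract in which agents can only construct proposals, while review and execution happen elsewhere. The sketch below is purely illustrative (every name in it is invented for this example), not a reference design:

// Toy model of separating proposal from execution; all names are illustrative.
type Proposal = {
  summary: string;       // e.g., "Terminate vendor contract: Joe's Plumbing"
  diff: string;          // the change, expressed in a reviewable form
  irreversible: boolean; // true if the action escapes the system boundary
};

interface ApprovalGate {
  review(p: Proposal): Promise<"approved" | "rejected">;
}

// Agents call propose(); they never call execute() directly.
async function propose(
  p: Proposal,
  gate: ApprovalGate,
  execute: (p: Proposal) => Promise<void>
): Promise<void> {
  const verdict = await gate.review(p); // human or policy review of the diff
  if (verdict === "approved" && !p.irreversible) {
    await execute(p); // only reversible, approved changes are automated
  }
  // Irreversible actions remain surfaced intent, awaiting a human-driven step.
}

The point is structural: review happens on a diff-like artifact, and anything irreversible never flows straight from an agent to the outside world.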

Several patterns from software translate cleanly: durable history creates accountability; branching protects the canonical state; structured approvals are the primary mechanism of resilience; reproducibility enables auditing and learning.

Disciplines that lack these properties will struggle to govern AI agents. Tools alone won’t fix this. They need norms, repeatable processes, and artifact structure—in short, their own Git, adapted to their risk profile.

The lesson from software is not that AI adoption is easy. It is that adoption is procedural before it is technical. Git quietly orchestrates isolation, clarity, history, and review. Every discipline that wants similar gains will need an equivalent backbone—and, where mistakes are irreversible, a way to keep the genie in the bottle.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.

(image/jpeg; 11.55 MB)

What is GitOps? Extending devops to Kubernetes and beyond 15 Jan 2026, 1:00 am

Over the past decade, software development has been shaped by two closely related transformations. One is the rise of devops and continuous integration and continuous delivery (CI/CD), which brought development and operations teams together around automated, incremental software delivery.

The other is the shift from monolithic applications to distributed, cloud-native systems built from microservices and containers, typically managed by orchestration platforms such as Kubernetes.

While Kubernetes and similar platforms simplify many aspects of running distributed applications, operating these systems at scale is still complicated. Configuration sprawl, environment drift, and the need for rapid, reliable change all introduce operational challenges. GitOps emerged as a way to address those challenges by extending familiar devops and CI/CD techniques beyond application code and into infrastructure and system configuration.

At the heart of GitOps is the concept of infrastructure as code (IaC). In a GitOps model, not only application code but also infrastructure definitions, deployment configurations, and operational settings are described in files stored in a version control system. Automated processes continuously compare the running system with those declarations and work to bring the live environment back into alignment when differences appear.

In this approach, the version control repository serves as the system of record for how applications and their supporting infrastructure should look in production. Changes flow through the same review, approval, and automation pipelines that developers already use for software, bringing greater consistency, traceability, and repeatability to cloud-native operations.

At a high level, GitOps refers to a set of operational practices for managing cloud-native systems using declarative configuration, version control, and automated reconciliation. Rather than treating infrastructure and application configuration as mutable runtime state, GitOps treats them as versioned artifacts that move through the same review, testing, and deployment processes as application code.

GitOps defined

The term GitOps was originally coined and popularized by Weaveworks, which helped formalize the approach in the context of Kubernetes operations. While that early work shaped the way GitOps was discussed and implemented, GitOps has since evolved into a broadly adopted, vendor-neutral pattern. Today, it describes a shared set of ideas rather than a specific product or platform.

The defining characteristic of GitOps is its reliance on declarative configuration stored in a version control system. Instead of issuing imperative commands to change live systems, teams describe the desired state of applications and infrastructure in configuration files. Automated agents then continuously compare that declared state with what is actually running and work to reconcile any differences. This pull-based model—where systems converge toward the desired state defined in version control—provides built-in drift detection, repeatability, and a clear audit trail for every change.

Because GitOps centers on configuration files stored in a version control system, familiar software development practices carry over naturally. Changes are proposed through commits, reviewed before being accepted, and tracked over time. Rollbacks are accomplished by reverting to known-good versions, and the history of how a system evolved is preserved alongside the configuration itself.

While the use of Git as the version control system is not strictly required, it has become the default choice because of its ubiquity in modern devops workflows and its strong support for collaboration and change management, so its place in the name has stuck.

What is the CI/CD process?

A complete look at CI/CD is beyond the scope of this article—see the InfoWorld explainer on the subject—but we need to say a few words about CI/CD because it’s at the core of how GitOps works. The continuous integration half of CI/CD is enabled by version control repositories like Git: Developers can make constant small improvements to their codebase, rather than rolling out huge, monolithic new versions every few months or years. The continuous deployment piece is made possible by automated systems called pipelines that build, test, and deploy the new code to production.

Again, we keep talking about code here, and that usually summons up visions of executable code written in a programming language such as C or Java or JavaScript. But in GitOps, the “code” we’re managing is largely made up of configuration files. This isn’t just a minor detail — it’s at the heart of what GitOps does. These config files are, as we’ve said, the “single source of truth” describing what our system should look like. They are declarative rather than imperative. That means that instead of saying “start up ten servers,” the configuration file will simply say, “this system includes ten servers.”
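To make the reconciliation idea concrete, here is a deliberately tiny sketch of the loop a GitOps agent runs, written as TypeScript pseudocode rather than any particular tool’s API (all names are illustrative):

// Conceptual GitOps reconciliation loop; not Argo CD's or Flux's actual API.
interface DesiredState {
  servers: number; // "this system includes ten servers"
}

async function reconcile(
  readDesiredState: () => Promise<DesiredState>, // e.g., parse config files from the Git repo
  observeLiveState: () => Promise<DesiredState>, // e.g., query the platform's API
  apply: (delta: number) => Promise<void>        // add or remove capacity to converge
): Promise<void> {
  const desired = await readDesiredState();
  const live = await observeLiveState();
  const delta = desired.servers - live.servers;
  if (delta !== 0) {
    // Any nonzero delta is drift; the controller converges the system toward Git.
    await apply(delta);
  }
}

Real controllers run a loop like this continuously, and every convergence traces back to a commit, which is where GitOps gets its audit trail.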

GitOps and Kubernetes

GitOps first took hold in the Kubernetes ecosystem, where declarative configuration and continuous reconciliation are core design principles. As a result, Kubernetes remains the most common and best-understood environment for applying GitOps practices. A typical GitOps-driven update process for a Kubernetes application looks like this:

  1. A developer proposes a change by committing updated application code or configuration to a version control repository, usually through a pull request.
  2. That change is reviewed and approved, then merged into the main branch.
  3. The merge triggers an automated CI/CD pipeline that tests the change, builds new artifacts if needed, and publishes them to a registry.
  4. A GitOps controller or similar automated agent detects the updated desired state stored in version control.
  5. The controller compares that desired state with the current state of the Kubernetes cluster and applies the necessary changes to bring the cluster back into alignment.

This pull-based reconciliation loop—where the cluster continuously converges toward the desired state defined in version control—is central to how GitOps works in practice. While Kubernetes provides a natural fit for this model, it represents just one canonical use case. The same patterns increasingly apply to infrastructure provisioning, policy enforcement, and multi-cluster operations beyond Kubernetes itself.

GitOps tooling in practice: Argo CD, Flux, and the ecosystem

GitOps is enabled by a set of tools that embody the principles we’ve outlined, with some open-source projects emerging as de facto standards in cloud-native environments.

At the center of the GitOps ecosystem is Argo CD, an open-source controller that continuously monitors a version control repository and ensures that the state of running systems matches the declared desired state. Argo CD is widely used in Kubernetes environments because it directly implements pull-based reconciliation: it compares the desired state stored in Git with the cluster’s actual state and applies changes to correct any drift.

Alongside Argo CD, Flux is another prominent open source GitOps engine. Both Flux and Argo CD help teams adopt GitOps workflows by managing the synchronization loop between code and runtime, but they differ in operational philosophy, integration surfaces, and ecosystem fit.

GitOps tooling often appears as part of broader platforms or integrated stacks rather than as isolated utilities. For example, multicloud and cluster management solutions now routinely include GitOps support, with Argo CD or compatible controllers bundled alongside deployment, policy, and governance capabilities.

In addition to Flux and Argo CD, a range of auxiliary tools contribute to a complete GitOps ecosystem: policy as code engines (e.g., Open Policy Agent), drift detection systems, and infrastructure provisioning tools that mesh with Git-centric workflows.

GitOps, devops, and normalization

GitOps grew out of the same forces that drove devops into mainstream IT practice, and in its early days, GitOps was often discussed as a distinct extension of devops, specifically tailored to managing declarative infrastructure and Kubernetes-centric systems. At the time, GitOps was still relatively new and not yet widely adopted outside cloud-native pioneers.

Over the last several years, however, GitOps practices have become deeply woven into how teams operate modern cloud environments. Rather than being treated as an optional add-on or marketing term, the core ideas of GitOps — using version-controlled, declarative configuration and automated reconciliation loops to continuously align running systems with intended state — are now part of standard operational practice in many Kubernetes-centric shops. In this sense, GitOps has shifted from a buzzword about what might be possible to a baseline pattern for cloud-native operations, much like devops itself did years earlier.

In environments where Kubernetes and declarative systems are the norm, GitOps workflows are the default way teams manage and deploy change. Many organizations now implement these patterns without explicitly calling them “GitOps,” just as few teams today explicitly say they do “CI/CD” even though continuous pipelines are taken for granted. The term has become less prominent in marketing, but its practices are often embedded in pipelines, controllers, and platform tooling.

That normalization shows up in how GitOps workflows are woven into broader operational frameworks. For example, platform engineering teams frequently build internal developer platforms that encapsulate GitOps patterns behind standardized developer APIs, making the pattern invisible to most application teams while still providing the auditability and automation that GitOps promises.

GitOps beyond Kubernetes: infrastructure, policy, and drift

While GitOps first gained traction as a way to manage Kubernetes deployments, its core principles apply broadly to infrastructure and operational concerns beyond any single orchestration platform. GitOps treats desired state as declarative configuration stored in version control and uses automated reconciliation to ensure running systems align with that state. That pattern naturally extends to infrastructure provisioning, policy enforcement, configuration drift detection, and governance workflows across diverse environments.

In modern operational stacks, infrastructure is increasingly defined declaratively, whether through Kubernetes manifests, Terraform modules, or other infrastructure-as-code formats. Storing these declarations in version control enables the same peer-review, auditability, and rollback practices developers already use for application code. Automated tooling then continuously detects when the live infrastructure diverges from the declared state and works to bring it back into alignment, reducing the risk of configuration drift and inadvertent misconfigurations.

Configuration drift — the state where an environment has diverged from what’s declared in version control — remains a major operational headache, especially in complex, dynamic systems. Drift can arise from ad hoc fixes, emergency updates, or manual changes made outside normal pipelines, and it can lead to inconsistencies, outages, and security gaps. By continually checking running systems against the desired state in Git and reconciling deviations automatically, GitOps workflows help teams keep environments predictable and auditable.

Policy enforcement and compliance are another natural extension of GitOps patterns. As organizations adopt declarative practices, policy-as-code engines and drift detection systems can be woven into GitOps pipelines to validate that proposed configurations meet security, compliance, or operational standards before they’re ever applied to running systems. Embedding policy checks into declarative workflows brings consistency to governance while preserving the automation and speed that devops teams expect.

GitOps as an operational mindset

GitOps began as a way to bring devops discipline to Kubernetes operations, but its longer-term impact has been more subtle. In many ways, it’s been absorbed into the fabric of modern cloud-native operations, where declarative configuration, version control, and automated reconciliation are taken for granted. Today, GitOps is less about a specific set of tools or a named practice and more about an operational mindset. By treating infrastructure and configuration as versioned, auditable artifacts and relying on automation to enforce consistency, GitOps helps teams manage complexity at scale. Even as the term itself fades from the spotlight, the practices it introduced continue to shape how distributed systems are built, deployed, and operated.

(image/jpeg; 0.43 MB)

React tutorial: Get started with the React JavaScript library 15 Jan 2026, 1:00 am

Despite many worthy contenders, React remains the most popular front-end framework, and a key player in the JavaScript development landscape. React is the quintessential reactive engine, continually innovating alongside the rest of the industry. A flagship open source project at Facebook, React is now part of Meta Open Source. For developers new to JavaScript and web development, this tutorial will get you started with this vital technology.

React is not only a front-end framework but also a component in full-stack frameworks like Next.js. Newer additions like React server-side rendering (SSR) and React server components (RSC) further blur the line between server and client.

Also see: Is the React compiler ready for primetime?

Why React?

React’s prominence makes it an obvious choice for developers just starting out with web development. It is often chosen for its ability to offer a smooth and encompassing developer experience (DX), which distinguishes it from frameworks like Vue, Angular, and Svelte. It could be said that React’s true “killer feature” is the perks that come with longstanding popularity: learning resources, community support, libraries, and developers are all plentiful in the React ecosystem.

Installing React

Real-world React development requires a build tool and a dev server, which we will explore in the next section. But to get your feet wet, we can start out with an online playground. There are several high-quality playgrounds for React, including full-blown environments like StackBlitz or Codesandbox. For a quick taste, we will use PlayCode React.

When you first open it, PlayCode React gives you a basic layout like the one shown here:

A screenshot shows the layout of a basic React JavaScript application.

Matthew Tyson

The menu on the left is the file explorer, at the top is the code window, and at the bottom are the console (on the left) and the preview pane (on the right).

From this screenshot, you can see how the content of the code is displayed on the preview pane, but this basic layout doesn’t use any variables (or “state,” as it’s known in React). It does let you see some of the plumbing, like the React library import and the exported App function.

Modern React is functional. The App function has a return value that is the actual output for the component. The component’s return is specified by JSX, a templating language that lets you use HTML along with variables and JavaScript expressions. Right now, the app just has some simple markup.

The classic example you see next is a “Counter” that lets you increase and decrease a displayed value using buttons. We’ll do a slight “Spinal Tap” variation of this, where the counter only goes to 11 and displays a message:

A screenshot of a counter app developed in React.

Matthew Tyson

You can take a look at the running example here, and the full code for the example is below:

import React, { useState } from 'react';

export function App() {
  // 1. The State
  const [volume, setVolume] = useState(0);

  return (
    <div>
      <h1>Spinal Tap Amp 🎸</h1>

      {/* 2. The "View" (Displaying the state) */}
      <h2>{volume}</h2>

      {/* 3. The Actions */}
      <button onClick={() => setVolume(volume - 1)}>Quieter</button>
      <button onClick={() => setVolume(volume + 1)}>Louder</button>

      {/* 4. Conditional */}
      {volume === 11 && (
        <p>"Why don't you just make ten louder?"</p>
      )}
    </div>
  );
}

If you play with the example, you’ll see that clicking the buttons changes the value, and the display automatically reflects the change. This is the essential magic of a reactive engine like React. The state is a managed variable that React automatically updates and displays. State is declared like so:

const [volume, setVolume] = useState(0);

The syntax is a bit funky if you are coming from straight JavaScript, but most developers can adapt to it quickly. Basically, useState(0) says, with a default value 0, give me a variable, volume, and a function to set it, setVolume.

To display the value in the view, we use: {volume}.

To modify the value, we use button event handlers. For example, to increment, we’d do:

onClick={() => setVolume(volume + 1)}

Here we’ve directly modified the volume state, and React will update accordingly. If we wanted to, we could call a function (for example, if the logic were more involved).
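For instance, a hypothetical extracted handler, which is also one way to enforce the “goes to 11” cap, might look like this:

// Named handler instead of an inline arrow; caps the amp at 11
function turnItUp() {
  setVolume(Math.min(11, volume + 1));
}

<button onClick={turnItUp}>Louder</button>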

Finally, when the value reaches 11, we display a message. This syntax is idiomatic React, and uses an embedded JavaScript equality check:

{volume === 11 && (
  <p>"Why don't you just make ten louder?"</p>
)}

The check says, if volume is 11, then render the enclosed <p> markup.

Using a build tool with React

Once upon a time, when NVIDIA was nothing but a graphics card company, it was quite a bit of work assembling a good build chain for React. These days, the process is much simpler, and the once ubiquitous create-react-app option is no more. Vite is now the standard choice for launching a new React app from the terminal, so that’s the approach you’ll learn here.

With that said, there are a few alternatives worth mentioning. VS Code has extensions that will provide you with templates or scaffolding, but what’s becoming more common is using an AI coding assistant. A tool like Copilot, ChatGPT, or Gemini can take a prompt describing the basics of the application in question, including the instruction to use React, and produce a basic React layout for you. AI assistants are available in both command-line and VS Code extension flavors. Or, for an even more forward-looking option, you could use something like Firebase Studio.

But enough about alternatives—Vite is the standard for a reason. It is repeatable, capable, and fast. To launch a new Vite app, you just enter the following in your command line:

$ npm create vite@latest

The interactive tool will walk you through the process, starting with selecting React as your technology:

A screenshot of the Vite CLI showing the option to select React.

Matthew Tyson

Use your own preferences for the other options (like using TypeScript versus JavaScript) and accept the option to install and launch the app immediately. Afterward, you’ll see a simple demo like this one:

A screenshot showing the Vite demo app built with React.

Matthew Tyson

The demo app has a counter component like the one we built earlier. If you press Ctrl-C to kill the Vite process running in the terminal, you can cd into the new directory. From there, you can see where the counter component is defined, in src/App.jsx (or App.tsx if you have selected TypeScript like I have).

It’s worth looking at that file to see how React appears on the server:

src/App.tsx

import { useState } from 'react'
import reactLogo from './assets/react.svg'
import viteLogo from '/vite.svg'
import './App.css'

function App() {
  const [count, setCount] = useState(0)

  return (
    <>
      <div>
        <a href="https://vite.dev" target="_blank">
          <img src={viteLogo} className="logo" alt="Vite logo" />
        </a>
        <a href="https://react.dev" target="_blank">
          <img src={reactLogo} className="logo react" alt="React logo" />
        </a>
      </div>
      <h1>Vite + React</h1>
      <div className="card">
        <button onClick={() => setCount((count) => count + 1)}>
          count is {count}
        </button>
        <p>
          Edit <code>src/App.tsx</code> and save to test HMR
        </p>
      </div>
      <p className="read-the-docs">
        Click on the Vite and React logos to learn more
      </p>
    </>
  )
}

export default App

Notice we export the App as a module, which is used by the src/main.tsx file to display the component in the view. That file creates the bridge between the respective worlds of React and HTML:

import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import './index.css'
import App from './App.tsx'

createRoot(document.getElementById('root')!).render(
  <StrictMode>
    <App />
  </StrictMode>,
)

Don’t worry too much about the details of how React bootstraps itself with createRoot and the render call (which you won’t have to interact with on a regular basis). The important thing is how the App component is imported and then used with the JSX.

Note

Strict mode adds warnings during development to help you catch component bugs early.

There are a few rules to bear in mind when using JSX, the templating language of React:

  • HTML elements are lowercase (<div>, <p>), but components are uppercase (<App>, <VolumeDisplay>).
  • You can’t just type “class” in JSX; instead, use className; e.g., <div className="card">.
  • To access the realm of JavaScript (and the application state) from within JSX, use curly braces: {2 + 2 != 5}.

React components and props

The main organizational concept in React is the component. Components are used to contain the functionality for a part of the view within a self-contained package. We’ve seen a component in action already with App, but it might be a little obscure, so let’s add another simple component to enhance the demonstration. This component also lets us explore another key part of React: props.

Let’s create a display of the counter value influenced by the Rob Reiner movie This Is Spinal Tap. To start, we create a new file at src/VolumeDisplay.jsx:

// src/VolumeDisplay.jsx

export function VolumeDisplay({ level }) {
  return (
    <div>
      {/* The Dial */}
      <div style={{
        display: 'flex',
        justifyContent: 'center',
        backgroundColor: level >= 11 ? '#d32f2f' : '#f0f0f0',
        color: level >= 11 ? 'white' : 'black',
        transition: 'all 0.2s ease'
      }}>
        {level}
      </div>

      {/* The Message */}
      {level >= 11 && (
        <p>"These go to eleven." 🤘</p>
      )}
    </div>
  );
}

This is a simple display, but there are a couple of things worth noting about it.

One is that we accept a prop (a property) “from above” with VolumeDisplay({ level }). This tells whatever parent component uses this one that VolumeDisplay accepts a single property, called level. VolumeDisplay uses the property by displaying it (though it adds a bit of fancying up using conditional logic like we have already seen).

The way we define the CSS values, inside double braces, {{ }}, and as a map of values is idiomatic React. (It isn’t essential at this point to grasp why it works that way, but basically, it is the JSX token { } wrapping a JavaScript map of CSS values that uses JavaScript-friendly camel-cased names, like justifyContent.)
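Another way to see the two sets of braces is to pull the object out into a named constant first (this snippet is purely illustrative):

// The inner braces are just a JavaScript object literal of camel-cased CSS values...
const dialStyle = { display: 'flex', justifyContent: 'center' };

// ...and the outer braces are the usual JSX "enter JavaScript here" syntax.
<div style={dialStyle}>{level}</div>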

Now, to utilize this component, we can go to App.jsx, and make two changes:

import { useState } from 'react'
import reactLogo from './assets/react.svg'
import viteLogo from '/vite.svg'
import './App.css'
// 1. Import our new component
import { VolumeDisplay } from './VolumeDisplay'

function App() {
  const [count, setCount] = useState(0)

  return (
    <>
      <div>
        <a href="https://vite.dev" target="_blank">
          <img src={viteLogo} className="logo" alt="Vite logo" />
        </a>
        <a href="https://react.dev" target="_blank">
          <img src={reactLogo} className="logo react" alt="React logo" />
        </a>
      </div>
      <h1>Vite + React</h1>

      {/* 2. Pass the 'count' state into the 'level' prop */}
      <VolumeDisplay level={count} />

      <div className="card">
        <button onClick={() => setCount((count) => count + 1)}>
          count is {count}
        </button>
        <p>
          Edit <code>src/App.tsx</code> and save to test HMR
        </p>
      </div>
      <p className="read-the-docs">
        Click on the Vite and React logos to learn more
      </p>
    </>
  )
}

export default App

Here, we’ve done two things: imported the new component and used it in the view.

Notice, also, that the <VolumeDisplay level={count} /> line passes the existing count state variable into VolumeDisplay as a prop. React will do the work of ensuring that whenever count changes, the VolumeDisplay will also be updated, including any dependent logic such as the conditional statements.

Now, if we run the code like so:

$ npm run dev

We get what you see in the screenshot below:

A screenshot of the running demo app built with Vite and React.

Matthew Tyson

Conclusion

The world is now your oyster, at least within the realm of JavaScript web development. Not only is React wildly popular, its basic ideas are applicable to a host of other innovative frameworks, including Svelte and Solid. (To get some idea of the alternatives, just type npm create vite@latest and look at all the available technologies.) Now that you have a basic introduction, a good next step for learning would be to add an input control that allows typing in the volume manually; a quick sketch of one way to do that follows below. Happy coding!
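As a rough starting point for that exercise, here is one possible shape of a controlled input wired to the volume state from the earlier playground example (adapt the names to count and setCount if you wire it into the Vite app instead); it is a sketch, not the only way to do it:

{/* A controlled number input bound to the existing volume state */}
<input
  type="number"
  min="0"
  max="11"
  value={volume}
  onChange={(e) => setVolume(Number(e.target.value))}
/>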

(image/jpeg; 11.59 MB)

From typos to takeovers: Inside the industrialization of npm supply chain attacks 14 Jan 2026, 11:48 pm

A massive surge in attacks on the npm ecosystem over the past year reveals a stark shift in the software supply‑chain threat landscape.

What once amounted to sloppy typosquatting attempts has evolved into coordinated, credential-driven intrusions targeting maintainers, CI pipelines, and the trusted automation that underpins modern development.

For security leaders, these aren’t niche developer mishaps anymore — they’re a direct pathway into production systems, cloud infrastructure, and millions of downstream applications.

The goal is no longer to trick an individual developer, but to quietly inherit their authority. And with it, their distribution reach.

“NPM is an attractive target because it is the world’s largest JavaScript package repository and a key control point for distributing software,” said Melinda Marks, cybersecurity practice director at Enterprise Security Group. “Security teams need an understanding of dependencies and ways to regularly audit and mitigate risk.”

Structural weaknesses in the npm infrastructure

Nearly every enterprise relies on npm, whether directly or indirectly. According to IDC, 93% of organizations use open-source software, and npm remains the largest package registry in the JavaScript ecosystem. “Compromising a single popular package can immediately reach millions of downstream users and applications,” said Katie Norton, IDC’s research manager for DevSecOps. A single compromise, in other words, turns one stolen credential into what she described as a “master key” for distribution.

That scale, however, is only part of the risk.

The exposure is amplified by structural weaknesses in how modern development pipelines are secured, Norton remarked. “Individual open-source maintainers often lack the security resources that enterprise teams rely on, leaving them susceptible to social engineering,” she said. “CI/CD runners and developer machines routinely process long-lived secrets that are stored in environment variables or configuration files and are easily harvested by malware.”

“Build systems also tend to prioritize speed and reliability over security visibility, resulting in limited monitoring and long dwell times for attackers who gain initial access,” Norton added.

While security leaders can’t patch their way out of this one, they can reduce exposure. Experts consistently point to the same priorities: treating CI runners as production assets, rotating and scoping publish tokens aggressively, disabling lifecycle scripts unless required, and pinning dependencies to immutable versions.

“These npm attacks are targeting the pre-install phase of software dependencies, so typical software supply chain security methods of code scanning cannot address these types of attacks,” Marks said. Detection requires runtime analysis and anomaly detection rather than signature-based tooling.

From typo traps to legitimate backdoors

For years, typosquatting defined the npm threat model. Attackers published packages with names just close enough to popular libraries (“lodsash,” “expres,” “reacts”) and waited for automation or human error to do the rest. The impact was usually limited, and remediation was straightforward.

That model began to break in 2025.

Instead of impersonating popular packages, attackers increasingly compromised real ones. Phishing campaigns spoofing npm itself harvested maintainer credentials. Stolen tokens were then used to publish trojanized updates that appeared legitimate to every downstream consumer. The Shai-Hulud campaign illustrated the scale of the problem, affecting tens of thousands of repositories and leveraging compromised credentials to self-propagate across the ecosystem.

“The npm ecosystem has become the crown jewels of modern development,” said Kush Pandya, a cybersecurity researcher at Socket.dev. “When a single prolific maintainer is compromised, the blast radius spans hundreds of downstream projects.”

The result was a quiet but powerful shift: attackers no longer needed to create convincing fakes. They could ship malware through trusted channels, signed and versioned like any routine update.

Developer environments over developer laptops

Modern npm attacks increasingly activate inside CI/CD environments rather than on developer laptops. Post-install scripts, long treated as benign setup helpers, became an execution vector capable of running automatically inside GitHub Actions or GitLab CI. Once inside a runner, malicious packages could read environment variables, steal publish tokens, tamper with build artifacts, or even push additional malicious releases under the victim’s identity.
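To make the mechanics concrete, here is a deliberately simplified, hypothetical sketch of what an install-time lifecycle script can do once it runs inside a CI runner; the file name and filter are illustrative, and real payloads are heavily obfuscated and exfiltrate credentials rather than log them:

// postinstall.js (hypothetical): runs automatically on npm install if a package
// declares "scripts": { "postinstall": "node postinstall.js" } in package.json.
// Lifecycle scripts run with the runner's full environment, so any long-lived
// publish token or cloud credential exposed as an environment variable is readable.
const secretLike = Object.keys(process.env).filter((name) =>
  /TOKEN|SECRET|KEY|PASSWORD/i.test(name)
);
console.log(`This install script can read ${secretLike.length} secret-like variables.`);
// A real payload would send those values to attacker infrastructure instead of logging them.

This is why hardening guidance keeps returning to the same controls: install with npm’s --ignore-scripts flag (or set ignore-scripts=true in .npmrc) unless lifecycle scripts are genuinely required, and keep CI secrets short-lived and tightly scoped.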

“Developer environments and CI runners are now worth more than end-user machines,” Pandya noted. “They usually have broader permissions, access to secrets, and the ability to push code into production.”

Several campaigns observed in mid-2025 were explicitly CI-aware, triggering only when they detected automated build environments. Some included delayed execution or self-expiring payloads, minimizing forensic visibility while maximizing credential theft.

For enterprises, this represents a fundamental risk shift. CI systems often operate with higher privileges than any individual user, yet are monitored far less rigorously. “They are often secured with weaker defaults: long-lived publish tokens, overly permissive CI secrets, implicit trust in lifecycle scripts and package metadata, and little isolation between builds,” Pandya noted.

According to IDC Research, organizations allocate only about 14% of AppSec budgets to supply-chain security, with only 12% of them identifying CI/CD pipeline security as a top risk.

Evasion as a first-class feature

As defenders improved at spotting suspicious packages, attackers adapted too.

Recent npm campaigns have used invisible Unicode characters to obscure dependencies, multi-stage loaders that fetch real payloads only after environment checks, and blockchain-hosted command-and-control (C2) references designed to evade takedowns. Others deployed worm-like behavior, using stolen credentials to publish additional malicious packages at scale.

Manual review has become largely ineffective against this level of tradecraft. “The days when you could skim index.js and spot a malicious eval() are gone,” Pandya said.

“Modern packages hide malicious logic behind layers of encoding, delayed execution, and environment fingerprinting.” Norton echoed the concern, noting that these attacks operate at a behavioral level where static scanning falls short. “Obfuscation techniques make malicious logic difficult to distinguish from legitimate complexity in large JavaScript projects,” she said. “CI-aware payloads and post-install scripts introduce behavior that only manifests under specific environmental conditions.”

(image/jpeg; 5.22 MB)

Compose Multiplatform brings auto-resizing to interop views 14 Jan 2026, 3:24 pm

JetBrains has released Compose Multiplatform 1.10.0, the latest version of the Kotlin-based declarative framework for building shared UIs across multiple platforms. Unveiled January 13, the update supports automatic resizing for native interop elements on both desktop and iOS deployments.

Resizing of these elements means they now can adapt their layout to their content, eliminating the need to calculate exact sizes manually and specify fixed dimensions in advance. On the desktop, SwingPanel adjusts its size based on the embedded component’s minimum, preferred, and maximum sizes. For iOS, UIKit interop views now support sizing according to the view’s fitting size (intrinsic content size). This enables proper wrapping of SwiftUI views (via UIHostingController) and basic UIView subclasses that do not depend on NSLayoutConstraints.

Instructions on getting started with Compose Multiplatform can be found at kotlinlang.org. Compose Multiplatform is an optional UI framework built atop Kotlin Multiplatform technology, for building applications for different platforms and reusing code. Compose Multiplatform applications will run on iOS, Android, macOS, Windows, Linux, and the web.

Also in version 1.10.0, Compose Multiplatform now uses the Web Cache API to cache successful responses for static assets and string resources. This avoids the delays associated with the browser’s default cache, which validates stored content through repeated HTTP requests and can be slow on low-bandwidth connections. The cache is cleared on every app launch or page refresh to ensure resources remain consistent with the application’s current state. This capability is an experimental feature.

Other improvements in Compose Multiplatform 1.10.0 include:

  • The Compose Hot Reload plugin now is bundled with the Compose Multiplatform Gradle plugin. Users no longer need to configure the Hot Reload plugin separately, as it is enabled by default for Compose Multiplatform projects targeting desktop.
  • The approach to previews has been unified across platforms. Developers can now use the androidx.compose.ui.tooling.preview.Preview annotation in the commonMain source set. Other annotations, such as org.jetbrains.compose.ui.tooling.preview.Preview and the desktop-specific androidx.compose.desktop.ui.tooling.preview.Preview, have been deprecated.
  • Navigation 3, a new library for managing navigation, is now supported on non-Android targets.
  • The following properties in DialogProperties have been promoted to stable and are no longer experimental: usePlatformInsets, useSoftwareKeyboardInset, and scrimColor. Similarly, the usePlatformDefaultWidth and usePlatformInsets properties in PopupProperties have also been promoted to stable.
  • The deprecation level for Popup overloads without the PopupProperties parameter has been changed to ERROR to enforce the use of the updated API.
  • For iOS, Compose Multiplatform now supports WindowInsetsRulers, which provides functionality to position and size UI elements based on window insets, such as the status bar, navigation bar, or on-screen keyboard.

(image/jpeg; 4.48 MB)

Output from vibe coding tools prone to critical security flaws, study finds 14 Jan 2026, 12:08 pm

Popular vibe coding platforms consistently generate insecure code in response to common programming prompts, including creating vulnerabilities rated as ‘critical,’ new testing has found.

Security startup Tenzai’s top-line conclusion: the tools are good at avoiding security flaws that can be solved in a generic way, but struggle where what distinguishes safe from dangerous depends on context.

The assessment, which it conducted in December 2025, compared five of the best-known vibe coding tools — Claude Code, OpenAI Codex, Cursor, Replit, and Devin — by using pre-defined prompts to build the same three test applications.

The code output by the five tools across 15 applications (three each) was found to contain a total of 69 vulnerabilities. Around 45 of these were rated ‘low-medium’ in severity, with many of the remainder rated ‘high’ and around half a dozen ‘critical’.

While the number of low-medium vulnerabilities was the same for all five tools, only Claude Code (4 flaws), Devin (1) and Codex (1) generated critical-rated vulnerabilities.

The most serious vulnerabilities concerned API authorization logic (checking who is allowed to access a resource or perform an action), and business logic (permitting a user action that shouldn’t be possible), both important for e-commerce systems.

“[Code generated by AI] agents seems to be very prone to business logic vulnerabilities. While human developers bring intuitive understanding that helps them grasp how workflows should operate, agents lack this ‘common sense’ and depend mainly on explicit instructions,” said Tenzai’s researchers.

Offsetting this, the tools did a good job of avoiding common flaws that have long plagued human-coded applications, such as SQL injection (SQLi) and cross-site scripting (XSS) vulnerabilities, both of which still feature prominently in the OWASP Top 10 list of web application security risks.

“Across all the applications we developed, we didn’t encounter a single exploitable SQLi or XSS vulnerability,” said Tenzai.

Human oversight

The vibe coding sales pitch is that it automates everyday programming jobs, boosting productivity. While this is undoubtedly true, Tenzai’s test shows that the idea has limits; human oversight and debugging are still needed.

This isn’t a new discovery. In the year since the concept of ‘vibe coding’ was developed, other studies have found that, without proper supervision, these tools are prone to introducing new cyber security weaknesses.

But it’s not simply that vibe coding platforms aren’t picking up security flaws in their code; in some cases, defining what counts as good or bad is simply impossible using general rules or examples.

“Take SSRF [Server-Side Request Forgery]: there’s no universal rule for distinguishing legitimate URL fetches from malicious ones. The line between safe and dangerous depends heavily on context, making generic solutions impossible,” said Tenzai. 

The obvious solution is that, having invented vibe coding agents, the industry should now focus on vibe coding checking agents, which, of course, is where Tenzai, a small startup not long out of stealth mode, thinks it has found a gap in the market for its own technology. It said, “based on our testing and recent research, no comprehensive solution to this issue currently exists. This makes it critical for developers to understand the common pitfalls of coding agents and prepare accordingly.”

Debugging AI

The deeper question raised by vibe coding isn’t how well tools work, then, but how they are used. Telling developers to keep eyes on vibe code output isn’t the same as knowing this will happen, any more than it was in the days when humans made all the mistakes.

“When implementing vibe coding approaches, companies should ensure that secure code review is part of any Secure Software Development Lifecycle and is consistently implemented,” commented Matthew Robbins, head of offensive security at security services company Talion. “Good practice frameworks should also be leveraged, such as the language-agnostic OWASP Secure Coding Practices, and language-specific frameworks such as SEI CERT coding standards.” 

Code should be tested using static and dynamic analysis before being deployed, Robbins added. The trick is to get debugging right. “Although vibe coding presents a risk, it can be managed by closely adhering to industry-standard processes and guidelines that go further than traditional debugging and quality assurance,” he noted.

However, according to Eran Kinsbruner, VP of product marketing at application testing organization Checkmarx, traditional debugging risks being overwhelmed by the AI era.

“Mandating more debugging is the wrong instinct for an AI-speed problem. Debugging assumes humans can meaningfully review AI-generated code after the fact. At the scale and velocity of vibe coding, that assumption collapses,” he said.

“The only viable response is to move security into the act of creation. In practice, this means agentic security must become a native companion to AI coding assistants, embedded directly inside AI-first development environments, not bolted on downstream.”

This article originally appeared on CSOonline.

(image/jpeg; 0.29 MB)

Chinese AI firm trains state-of-the-art model entirely on Huawei chips 14 Jan 2026, 7:04 am

Chinese company Zhipu AI has trained an image generation model, GLM-Image, entirely on Huawei processors, demonstrating that Chinese firms can build competitive AI systems without access to advanced Western chips.

The model, released on Tuesday, marks the first time a state-of-the-art multimodal model completed its full training cycle on Chinese-made chips, Zhipu said in a statement. The Beijing-based company trained the model on Huawei’s Ascend Atlas 800T A2 devices using the MindSpore AI framework, completing the entire pipeline from data preprocessing through large-scale training without relying on Western hardware.

The achievement carries strategic significance for Zhipu, which the US Commerce Department last year added to a list of entities acting contrary to US national security or foreign policy interests over its alleged ties to China’s military. The designation effectively cut the company off from Nvidia’s H100 and A100 GPUs, which have become standard for training advanced AI models, forcing Chinese firms to develop alternatives around domestic chip architectures.

Following that listing, Zhipu began collaborating with Huawei on GLM-Image. Huawei’s Ascend processors have become the primary alternative for Chinese AI companies restricted from purchasing Nvidia’s hardware. The model’s successful training on Ascend chips provides a data point that Chinese firms can develop competitive AI systems despite restricted access to Western chips.

“This proves the feasibility of training high-performance multimodal generative models on a domestically developed full-stack computing platform,” Zhipu’s statement added.

Zhipu has made GLM-Image available through an API for 0.1 yuan (approximately $0.014) per generated image. The company released the model weights on GitHub, Hugging Face, and ModelScope Community for independent deployment.

The pricing positions GLM-Image as a cost-effective option for enterprises generating marketing materials, presentations, and other text-heavy visual content at scale.

Technical approach and benchmark performance

GLM-Image employs a hybrid architecture combining a 9-billion-parameter autoregressive model with a 7-billion-parameter diffusion decoder, according to Zhipu’s technical report. The autoregressive component handles instruction understanding and overall image composition, while the diffusion decoder focuses on rendering fine details and accurate text.

The architecture addresses challenges in generating knowledge-intensive visual content where both semantic understanding and precise text rendering matter, such as presentation slides, infographics, and commercial posters.

On the CVTG-2K benchmark, which measures accuracy in placing text across multiple image locations, GLM-Image achieved a Word Accuracy score of 0.9116, ranking first among open-source models. The model also led the LongText-Bench test for rendering extended text passages, scoring 0.952 for English and 0.979 for Chinese across eight scenarios including signs, posters, and dialog boxes.

The model natively supports multiple resolutions from 1024×1024 to 2048×2048 pixels without requiring retraining, the report added.

Hardware optimization strategy

Training GLM-Image on Ascend hardware required Zhipu to develop custom optimization techniques for Huawei’s chip architecture. The company built a training suite that implements dynamic graph multi-level pipelined deployment, enabling different stages of the training process to run concurrently and reducing bottlenecks.

Zhipu also created high-performance fusion operators compatible with Ascend’s architecture and employed multi-stream parallelism to overlap communication and computation operations during distributed training. These optimizations aim to extract maximum performance from hardware that operates differently from the Nvidia GPUs most AI frameworks target by default.

The technical approach validates that competitive AI models can be trained on China’s domestic chip ecosystem, though the cost in development time and engineering effort remains unclear.

Zhipu did not say how many processors or how long it took to train its model, nor how the requirements compared to equivalent Nvidia-based systems.

Implications for global AI development

For multinational enterprises operating in China, GLM-Image’s training on domestic hardware provides evidence that Chinese AI infrastructure can support state-of-the-art model development. Companies with Chinese operations may need to evaluate whether to develop strategies around platforms like Huawei’s Ascend and frameworks like MindSpore.

The release comes as Chinese companies invest in domestic AI infrastructure alternatives. Whether export controls will slow or accelerate the development of parallel AI ecosystems remains a subject of policy debate.

(image/jpeg; 2.66 MB)
