AI and APIs

Beyond Bolt-On AI: A New Way to Build Software

In the rapidly evolving world of technology, Artificial Intelligence (AI) is no longer just an add-on feature—it is fundamentally transforming how software is built and architected. Alan Hsiao, founder of Cognitivo and Senior Visiting Fellow in the School of Computer Science and Engineering at the University of New South Wales, offers a compelling vision of this transformation. His insights reveal the profound shift from traditional software development to AI-driven programming, highlighting the architectural changes required to build smarter, more adaptive software systems that meet the demands of tomorrow’s intelligent ecosystems.


The Evolution of Software: From Software 1.0 to Software 2.0 and Beyond

Since the mid-20th century, software development has undergone several paradigm shifts. Initially, programming was done in low-level languages such as machine code and assembly. Over time, higher-level languages like C, C++, Java, and Python emerged, accompanied by software design patterns and development environments that enabled scaling and complexity management. This evolution led to microservices, DevOps practices, containerization, and infrastructure as code—all hallmarks of modern software engineering.

We are now at the next evolutionary step: the transition from Software 1.0 to Software 2.0, a term popularized by Andrej Karpathy. Software 1.0 is defined by deterministic logic—explicitly coded rules and if-else statements. In contrast, Software 2.0 is data-driven programming, where developers feed data into machine learning models that learn the desired functions and behaviors from it. This shift is not merely a change in coding style; it represents a fundamental transformation in how software is conceived, built, and maintained.
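
To make the contrast concrete, here is a minimal sketch (illustrative, not from the talk) of the same spam check written both ways: Software 1.0 as hand-coded rules, and Software 2.0 as a function learned from labelled examples using scikit-learn.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def is_spam_v1(text):
        # Software 1.0: behavior is hand-written deterministic logic
        return "free money" in text.lower() or "winner" in text.lower()

    # Software 2.0: behavior is learned from data rather than coded
    texts = ["free money inside", "lunch at noon?", "you are a winner", "meeting notes"]
    labels = [1, 0, 1, 0]
    vec = CountVectorizer().fit(texts)
    is_spam_v2 = MultinomialNB().fit(vec.transform(texts), labels)

    print(is_spam_v1("Free money!"))                                # rule fires -> True
    print(is_spam_v2.predict(vec.transform(["free money today"])))  # learned -> [1]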

This move towards data-driven programming requires new ecosystems and tooling akin to those developed for previous software generations. These include ML Ops (extending DevOps with continuous training), new foundational models that reduce coding effort, and semantic protocols for AI-to-AI communication. The ultimate goal is a higher level of abstraction that enables low-code development and significantly boosts productivity.
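
As a rough sketch of what “continuous training” means in practice, the loop below retrains a model on fresh data and promotes it only when it beats the currently deployed one. The data source is synthetic and the promotion step stands in for a real model registry.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    def fetch_latest_batch(rng):
        # stand-in for pulling freshly labelled production data
        X = rng.normal(size=(200, 4))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        return X, y

    rng = np.random.default_rng(0)
    deployed, deployed_acc = None, 0.0
    for cycle in range(3):  # in practice a scheduled or drift-triggered job
        X, y = fetch_latest_batch(rng)
        candidate = LogisticRegression().fit(X[:150], y[:150])
        acc = accuracy_score(y[150:], candidate.predict(X[150:]))
        if acc > deployed_acc:  # promote only if it beats the live model
            deployed, deployed_acc = candidate, acc
            print(f"cycle {cycle}: promoted model, holdout accuracy {acc:.2f}")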


The Tremendous Economic Potential of AI-Driven Software

The economic impact of AI and related technologies is staggering. Alan highlights current market sizes:

  • Chips: $626 billion annually
  • Servers and Cloud Infrastructure: $930 billion
  • AI platforms: $60 billion
  • Data Warehousing: $45 billion
  • Foundational AI Models: $25 billion (as of 2024)

Combining these sectors, reports from McKinsey and Frontier Economics estimate the gross value added of AI-driven technologies to reach trillions of dollars. The real prize lies in boosting productivity and utilization across everyday industries, or what Alan calls the “Main Street” of the economy. However, realizing this potential comes with significant technical and operational challenges.

Key Challenges in Building AI-Driven Software

1. The Separation of Transactional and Analytical Systems

One of the oldest and most persistent challenges in enterprise IT architecture is the divide between Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP). Transactional systems require strong data integrity and real-time updates, ensuring that data is consistent and accurate as it changes. Analytical systems, meanwhile, prioritize breadth and depth of data for insights, often aggregating information across time and organizational silos but usually operating in batch mode rather than real time.

This separation leads to inefficiencies and limitations. Transactional applications are typically not “smart” because they lack the analytical context, while analytical systems are too slow and disconnected from real-time operations. Furthermore, organizations often replicate data multiple times—copying from transactional databases to data lakes, then to data warehouses, and finally into analytical engines—resulting in costly data movement and limited functionality.

Attempting to mirror complex business logic across these duplicated datasets is virtually impossible, leading many data lake and warehouse projects to serve primarily as read-only reporting tools instead of dynamic, intelligent applications.
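
The staleness problem is easy to demonstrate. In the toy sketch below (illustrative only), each hop in the replication chain holds its own copy of a record, so a real-time update to the transactional store never reaches the analytical copy until the next batch run.

    import copy

    oltp = {"order_42": {"status": "paid", "amount": 99.0}}

    data_lake = copy.deepcopy(oltp)        # nightly batch copy #1
    warehouse = copy.deepcopy(data_lake)   # ETL copy #2, hours later
    analytics = copy.deepcopy(warehouse)   # engine-local copy #3

    oltp["order_42"]["status"] = "refunded"  # real-time update lands only here
    print(analytics["order_42"]["status"])   # still 'paid': analytics lags reality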

2. Limitations of AI SaaS Services and the “Bolt-On” Approach

Many enterprises today rely on AI-enabled SaaS products that provide specialized services like speech-to-text, computer vision, or analytics via APIs. While convenient, Alan points out that this “AI interfacing” approach only achieves limited automation—often just 5-10%—because these services operate on isolated snippets of data without full context.

Context is critical for insightfulness and true automation. Without understanding the broader customer situation, business environment, and domain-specific knowledge, AI services cannot deliver deep cognitive capabilities. Moreover, these AI SaaS services often function as black boxes, lacking transparency and human-in-the-loop mechanisms for refinement or compliance.
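
The pattern looks roughly like the sketch below. The endpoint and payload are hypothetical, but the shape is typical: each call hands the service an isolated snippet, and the service returns an answer with no access to the customer’s history, no explanation, and no feedback channel.

    import json
    from urllib import request

    def transcribe_snippet(audio_bytes: bytes) -> str:
        # hypothetical speech-to-text endpoint, invented for illustration
        req = request.Request(
            "https://api.example.com/v1/speech-to-text",
            data=audio_bytes,
            headers={"Content-Type": "audio/wav"},
        )
        with request.urlopen(req) as resp:        # black box: no context in,
            return json.load(resp)["transcript"]  # no explanation out

    # Each call is stateless: the service cannot know the caller phoned twice
    # yesterday about the same issue, so downstream automation stalls at 5-10%.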

Satya Nadella’s provocative statement, “SaaS is dead,” emphasizes the need to move AI agents closer to the data, enabling efficient data sharing and orchestration rather than simply calling external AI APIs.

3. The Last Mile Problem and Domain-Specific Knowledge

Large Language Models (LLMs) like GPT are powerful generalists but struggle with domain-specific tasks without expensive fine-tuning. Alan uses the analogy of an English literature student trying to give financial advice—without domain knowledge, the advice will be ineffective.

To overcome this “last mile” problem, organizations must infuse AI models with domain knowledge beyond language. This involves integrating knowledge graphs and ontologies that codify facts, relationships, and rules relevant to specific industries or business functions.
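
As a minimal sketch of what codifying facts, relationships, and rules can look like, the snippet below uses the rdflib library to build a tiny financial-products graph and query it. The vocabulary is invented for illustration, not any actual industry ontology.

    from rdflib import Graph, Literal, Namespace, RDF

    FIN = Namespace("http://example.org/finance#")
    g = Graph()
    # codified domain facts: products and their risk levels
    g.add((FIN.IndexFund, RDF.type, FIN.InvestmentProduct))
    g.add((FIN.IndexFund, FIN.riskLevel, Literal("medium")))
    g.add((FIN.TermDeposit, RDF.type, FIN.InvestmentProduct))
    g.add((FIN.TermDeposit, FIN.riskLevel, Literal("low")))

    # a structured query over explicit knowledge, not probabilistic association
    q = 'SELECT ?p WHERE { ?p a fin:InvestmentProduct ; fin:riskLevel "low" . }'
    for row in g.query(q, initNs={"fin": FIN}):
        print(row.p)  # -> http://example.org/finance#TermDeposit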

4. Privacy, Responsible AI, and Compliance

AI models produce stochastic and probabilistic outcomes, making traceability and accountability challenging. Many AI services operate as black boxes, complicating human oversight and the enforcement of data privacy regulations such as the “right to be forgotten.”

Responsible AI requires architectures that enable transparency, human feedback, and control over data usage. These aspects are critical for regulatory compliance and maintaining trust in AI-powered applications.

Reimagining Software Architecture for AI: A Unified Approach

To truly harness AI’s potential, Alan advocates for a fundamental architectural shift that breaks down the silos between transactional, analytical, and AI domains. This involves several key principles:

Polyglot, Single-Version Data Layer with ACID Integrity

Instead of replicating data across multiple systems, the new approach calls for a unified data layer optimized for both read and write operations across diverse data types—structured, unstructured, document, vector embeddings, and knowledge graphs.

This data layer should be built on commodity object stores and lakehouse architectures that provide a curated, consistent “one version of the truth” with transactional integrity. This enables AI models and applications to operate on fresh, reliable data without costly data movement.
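A minimal sketch of the one-copy-serves-both-workloads idea, using DuckDB over a single Parquet file (a production lakehouse would add an ACID table format such as Delta or Iceberg): the same stored data answers both a record-level lookup and an analytical aggregate, with no replication pipeline in between.

    import duckdb

    con = duckdb.connect()
    con.execute("""
        CREATE TABLE payments AS
        SELECT * FROM (VALUES
            (1, 'alice', 120.0), (2, 'bob', 80.0), (3, 'alice', 45.5)
        ) AS t(id, customer, amount)
    """)
    con.execute("COPY payments TO 'payments.parquet' (FORMAT PARQUET)")

    # operational-style read and analytical read against the same single copy
    one = con.execute("SELECT * FROM 'payments.parquet' WHERE id = 2").fetchall()
    agg = con.execute("""
        SELECT customer, SUM(amount) FROM 'payments.parquet' GROUP BY customer
    """).fetchall()
    print(one, agg)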

Multi-Model AI Integration: Perception, Prediction, and Generation

Alan describes three classes of AI models that should work in tandem over this shared data:

  • Perceptive AI Models: OCR, computer vision, speech-to-text—extracting information from the real world.
  • Predictive AI Models: Forecasting, planning, recommendation engines.
  • Generative AI Models: Language generation, image synthesis, and conversational agents.

These models, or “agents,” collaborate over the shared data and knowledge base to provide comprehensive intelligence and automation across business processes.
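
A toy sketch of this division of labour (all logic stubbed out for illustration): a perceptive agent extracts structure from a raw document, a predictive agent scores it, and a generative agent produces the human-facing output, all reading and writing one shared store.

    shared_store = {"documents": ["Invoice #42 total 120.00 due 2025-07-01"]}

    def perceive(store):
        # perceptive AI: extract structure from the raw document (stubbed)
        store["invoice"] = {"total": 120.00, "due": "2025-07-01"}

    def predict(store):
        # predictive AI: score late-payment risk from extracted fields (stubbed)
        store["late_risk"] = 0.8 if store["invoice"]["total"] > 100 else 0.2

    def generate(store):
        # generative AI: produce the human-facing summary (stubbed)
        store["summary"] = (
            f"Invoice of {store['invoice']['total']:.2f} has "
            f"{store['late_risk']:.0%} estimated late-payment risk."
        )

    for agent in (perceive, predict, generate):  # agents cooperate via shared data
        agent(shared_store)
    print(shared_store["summary"])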

Human-in-the-Loop and Feedback-Driven Refinement

AI systems must incorporate human feedback seamlessly within workflows. Alan shares examples of applications that extract data from millions of documents with AI pre-selection, but allow humans to verify, correct, and provide bounding box annotations. This feedback is captured and used to continuously refine models, ensuring accuracy and trustworthiness.

This approach addresses the inefficiencies of traditional machine learning development, where data scientists spend a large portion of their time manually labeling data. Integrating UI with AI to capture intent and corrections dramatically improves productivity and model quality.
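
A minimal sketch of the feedback capture itself, assuming a document-extraction UI like the one described; the field names and structure are invented. The point is that every human correction, including the bounding box the reviewer drew, becomes a labelled training example rather than throwaway work.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Correction:
        doc_id: str
        field: str
        model_value: str      # what the AI pre-selected
        human_value: str      # what the reviewer corrected it to
        bounding_box: tuple   # (x, y, width, height) drawn by the reviewer

    feedback_log = []

    def record_correction(c: Correction):
        # each correction is replayed later as a labelled training example
        feedback_log.append(asdict(c))

    record_correction(Correction("doc-001", "total", "1200.00", "120.00",
                                 (310, 544, 80, 18)))
    print(json.dumps(feedback_log, indent=2))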

Semantic Integration with Knowledge Graphs and Ontologies

One of the most transformative ideas Alan presents is the integration of Large Language Models (LLMs) with knowledge graphs (KGs). Whereas LLMs operate on probabilistic word associations, knowledge graphs represent structured, hierarchical relationships between concepts that are both machine- and human-readable.

This combination allows steering language models towards domain-specific reasoning without expensive fine-tuning. For example, pairing an LLM with a financial advice ontology enables an AI system to conduct unassisted interviews, generate predictive models, and offer tailored recommendations based on a user’s financial situation.

Moreover, the knowledge graph serves as a long-term, verifiable memory that can be updated with user inputs during conversations, overcoming the limited context window of LLMs.
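
A sketch of that memory pattern, again with rdflib and an invented vocabulary: facts captured mid-conversation are written to the graph, then serialized back into the next prompt, so knowledge persists beyond any single context window. The ask_llm step is left as a hypothetical call.

    from rdflib import Graph, Literal, Namespace

    USER = Namespace("http://example.org/user#")
    memory = Graph()

    def remember(subject, predicate, value):
        # facts stated in conversation become durable, verifiable triples
        memory.add((USER[subject], USER[predicate], Literal(value)))

    def grounded_prompt(question: str) -> str:
        facts = "; ".join(f"{p.split('#')[-1]} = {o}" for _, p, o in memory)
        return f"Known user facts: {facts}.\nQuestion: {question}"

    remember("alice", "annualIncome", 90000)       # captured mid-conversation
    remember("alice", "riskTolerance", "low")
    print(grounded_prompt("How much should I save each month?"))
    # a real system would now call the model: ask_llm(grounded_prompt(...))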

Building the Future: AI-Powered Applications and Architectures

AI Maturity Model: From Batch to Real-Time Cognitive Systems

Alan’s AI maturity model illustrates the evolution of AI adoption in organizations:

  • Level 0: Traditional data lakes and warehouses with batch machine learning models.
  • Level 1: Integration of AI and APIs with user interfaces, combining transactional and analytical data sources.
  • Level 2: Cyber-physical integration, where AI, edge computing, IoT, and robotics collaborate autonomously with minimal human intervention.

This model highlights the trajectory towards fully integrated, real-time, intelligent software systems that blend digital and physical worlds.

Natural Language Interfaces and Abstract Reasoning

Moving beyond data ingestion and simple retrieval, AI systems must be enabled to perform abstract thinking, planning, and reasoning—abilities that current language models lack. By converting natural language queries into structured queries (SQL, SPARQL), AI applications can reason over complex, heterogeneous data sources and ontologies.

For instance, answering a user’s financial planning question involves breaking down the query into domain-specific subqueries, invoking multiple predictive models, and arbitrating among their outputs based on context and user preferences.
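
A skeleton of that flow (every function here is a hypothetical stand-in): decompose the question into structured subqueries, run the relevant models, then arbitrate among their outputs using user context.

    def decompose(question: str) -> list[str]:
        # in practice an LLM translates natural language into SQL/SPARQL
        return [
            "SELECT SUM(amount) FROM expenses WHERE year = 2024",
            "SELECT income FROM profile",
        ]

    def run_models(subqueries: list[str]) -> dict:
        # stand-ins for real forecasting / recommendation models
        return {"conservative_plan": 500, "aggressive_plan": 900}

    def arbitrate(candidates: dict, risk_tolerance: str) -> int:
        # pick an output based on context and user preference
        return (candidates["conservative_plan"] if risk_tolerance == "low"
                else candidates["aggressive_plan"])

    plans = run_models(decompose("How much should I save each month?"))
    print(arbitrate(plans, risk_tolerance="low"))  # -> 500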

An AI Architecture for the Future

Alan proposes a comprehensive architecture that includes:

  • Semantic interpretation of high-order intent from natural language.
  • Human feedback integration and continuous learning.
  • Automated extraction of real-world data into ontologies.
  • Continuous training and creation of new algorithms through observation when existing models are insufficient.
  • Hierarchical planning to evaluate and trust AI outputs based on user-defined goals.

This architecture supports what we call “Software 2.5,” combining AI/ML functions with inference and deduction capabilities to create a general-purpose, evolvable AI system.

Conclusion: The Dawn of AI-First Software Development

This vision challenges the traditional paradigm of software development. Moving beyond bolt-on AI services and deterministic coding, the future lies in data-driven, context-aware, and semantically integrated AI systems that collaborate with humans in a feedback-driven loop.

This shift from Software 1.0 to Software 2.0—and beyond—promises to unlock unprecedented productivity and business value. By unifying transactional and analytical data, leveraging multi-model AI agents, embedding domain knowledge through ontologies, and architecting intelligent systems with reasoning and planning capabilities, organizations can build truly smart software that adapts and evolves.

At Cognitivo, we have developed an AI Factory—a configurable platform that enables enterprises to build AI-powered applications on top of their data lakehouses and AI engines. This approach empowers businesses to take control of their AI journey, moving from experimentation to scalable, domain-specific AI solutions.

As AI continues to reshape the software landscape, embracing this new way of building software will be essential for organizations aiming to thrive in the intelligent age.

Alan Hsiao

Founder & CEO of Cognitivo
Alan is the founder and CEO of Cognitivo and a Senior Visiting Fellow at UNSW. With nearly 20 years in IT, software, and banking, he specializes in AI-driven automation, data risk, and product origination. He also leads the Fintech AI Innovation Consortium, bridging industry and academia in financial technology innovation.
