
GenAI and the Engineering vs Design Chasm

The potential for Generative AI to radically transform industry cannot be overstated. As is normal in any step change in capability, new ecosystems and vendor offerings quickly form around the new technology, each hoping to secure a head start and gain market share.

But as we covered in a previous post back in 2020 about the challenge of introducing 5G, building innovative new products and services on top of technology platforms requires a degree of stability, reliability, and ubiquity. In the early days of the World Wide Web, back in the ‘late 1900s’, many software companies and developers worked hard to solve the engineering problems inherent in building web servers and browsers. It was only with the advent of HTML4, dynamic HTML, AJAX, and the standardisation of browser and server implementations that it became possible to build frameworks upon which service design innovation could be done. Had Meta (Facebook), Twitter, Instagram et al. tried to build their Web 2.0 applications in the 1990s, it would simply have been impossible – the web was still in its engineering phase, and not yet in a design phase that could support innovation.

Fast forward to 2023, and we are experiencing a similar stage of evolution with Generative AI. The explosive arrival of ChatGPT in November 2022, and OpenAI’s impressive work to offer API access and custom GPTs, has led many early adopters to believe that now is the time to seize a head start and build new products and services on these APIs. But as we have seen in the last 12 months, GenAI is still firmly in its engineering phase. There is no consensus on architecture; LLMs continue to evolve at pace; debates such as RAG vs fine-tuning as the route to reduced hallucination and domain alignment rage on; pricing and OpEx are widely variable; and, last but not least, the regulatory and societal implications of generating material using models trained on copyrighted material have not been fully worked through. And that is before we recall the extraordinarily amateur, even childish, behaviour of the OpenAI board and its hokey cokey with Sam Altman – which, for one bizarre week, posed a potentially existential threat to the company.

Despite these risks, 2023 saw an explosion in startup offerings and tooling which, one way or another, piggybacked off OpenAI’s ecosystem. This is courageous for several reasons:

  • Any ‘moat’ or unique value can quickly disappear, should OpenAI or other established large vendors add GenAI capabilities to their existing offerings. If your offering is, say, a custom GPT, a sales CRM AI extension, a service desk chatbot, or a legal platform for querying documents, good luck trying to outrun OpenAI, Salesforce, HubSpot, ServiceNow, or Microsoft.
  • The relentless trend for the hyperscalers – AWS, Microsoft Azure, GCP et al – to add GenAI capabilities to their PaaS offerings will not slow down. If your offering is a useful developer tooling add-on, then expect it to be commoditised, or even open-sourced, within 12-18 months.
  • Once new approaches for GenAI architectures gain popularity, or faster, cheaper ways of fine-tuning or training your models become commonplace, you may find that your early CapEx investment and ongoing OpEx costs render your offerings out of date and uncompetitive without costly and slow refactoring.
  • The pricing, availability, and performance (especially latency) of cloud-hosted LLM calls may vary as engineering problems continue to be addressed. This instability and unpredictability can impact your service quality, not to mention gross margin (GM) and business viability – one defensive pattern is sketched below.
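
To make that last risk concrete, a common mitigation is to wrap provider calls in retries with backoff, plus a fallback path, so that a slow or unavailable hosted LLM degrades your service rather than breaking it. The sketch below is illustrative only: the provider functions, their failure modes, and the timings are hypothetical stand-ins, not any vendor’s real API.

```python
import random
import time

# Hypothetical provider calls: stand-ins for whatever hosted-LLM SDK
# you actually use. The simulated 30% timeout rate is illustrative.
def call_primary_llm(prompt: str) -> str:
    if random.random() < 0.3:  # simulate variable availability/latency
        raise TimeoutError("primary LLM timed out")
    return f"[primary] answer to: {prompt}"

def call_fallback_llm(prompt: str) -> str:
    return f"[fallback] answer to: {prompt}"

def resilient_completion(prompt: str, retries: int = 2, backoff_s: float = 1.0) -> str:
    """Retry the primary provider with exponential backoff, then fall back."""
    for attempt in range(retries):
        try:
            return call_primary_llm(prompt)
        except TimeoutError:
            time.sleep(backoff_s * (2 ** attempt))  # back off before retrying
    return call_fallback_llm(prompt)  # degrade gracefully rather than fail

print(resilient_completion("Summarise our Q3 pipeline"))
```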

So what should we do – wait another 12-18 months? Probably not, but caution, and an awareness of the volatility of the current state of the art, is essential. If building new services, some design principles are worth remembering, particularly low coupling, modular architecture, and portability across cloud environments. Treating an LLM as a teenager going through growing pains can be a helpful metaphor: assume that its personality will change, and that its willingness to comply with instructions may tail off.
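
As a minimal sketch of what low coupling might look like in practice, the code below hides each vendor behind a single narrow interface, so that product logic never imports a vendor SDK directly. The backend classes and the SalesAssistant example are hypothetical; in a real system each adapter would wrap the relevant vendor SDK (OpenAI, Azure, Bedrock, a self-hosted model, and so on).

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """The narrow seam between your product and any one vendor's API."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

# Hypothetical adapters: in practice each would wrap a vendor SDK.
class VendorABackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[vendor A] {prompt}"

class SelfHostedBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[self-hosted] {prompt}"

class SalesAssistant:
    """Product logic depends only on the interface, never on a vendor."""
    def __init__(self, backend: LLMBackend) -> None:
        self.backend = backend

    def draft_email(self, notes: str) -> str:
        return self.backend.complete(f"Draft a follow-up email from: {notes}")

# Swapping vendors is then a one-line change at the composition root:
assistant = SalesAssistant(SelfHostedBackend())
print(assistant.draft_email("met CTO at conference, wants pricing"))
```

Confining vendor-specific code to the adapters means a pricing change, a model deprecation, or a migration to a self-hosted model becomes a contained swap rather than a rewrite.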

Taking an anti-fragile approach to your solution architecture and go-to-market (GTM) plans – spending a little more time explicitly validating your GTM business hypotheses, and considering PESTLE assumptions in addition to basic SWOT/MOST planning – will help minimise the impact of continued engineering upheavals.

By following this more cautious, robust approach, you may find yourself in even better shape to capture market share when the industry does truly enter its design phase – which, unlike the decade-long wait in the days of the web, may arrive within the next 12 months.
