
KUTIC Insights
Betting on the Machines: Navigating the AI Investment Boom
By Luke Sloman
Published July 3rd
Foreword
For this week’s post, I wanted to step away from our usual industry deep-dives and look at what is arguably the single biggest macro theme shaping business headlines today: Artificial Intelligence (AI). Over the past two years, AI has morphed from an academic curiosity and a backroom research topic into a central pillar of boardroom strategy and stock market euphoria. It is not an exaggeration to say AI is driving a historic reallocation of capital, one that cuts across chips, cloud computing, big tech platforms, startups, and even energy grids.
In Japan, we often talk about robotics, automation, and machine learning in manufacturing, but the scope of the AI boom goes far beyond factory floors. From Nvidia’s trillion-dollar rally to startups training new generative models every week, investors are trying to figure out the same question: where is the real money in AI going to be made? And how can we separate enduring value from the froth that always comes with a new wave of technological hype?
This report aims to answer that question in five parts: why the AI boom has arrived now, how the hardware supply chain has become its first big winner, who is fighting for control of the software layer, where the cracks and bottlenecks might appear, and finally, how we should think about AI as a long-term investment theme, especially in an era where geopolitics, regulation, and hardware constraints will likely shape the next phase.
Why AI’s Moment Has Finally Arrived
Artificial Intelligence is not a new dream. The term has existed since the 1950s, and there have been repeated “AI winters” whenever the technology overpromised and underdelivered. What makes the current wave different is a unique convergence of computing power, big data availability, and generative breakthroughs that have made the power of modern AI visible and usable to ordinary people, not just PhDs.
Arguably, the single biggest catalyst was the explosion of Large Language Models (LLMs) like OpenAI’s GPT-3 and GPT-4, which demonstrated that AI could generate surprisingly human-like text, write code, summarise legal documents, and even mimic creative tasks that were once thought safe from automation. Suddenly, AI wasn’t just about backend analytics or chatbots; it was about replacing or enhancing entire workflows. That concrete “wow” factor is what triggered the sudden rush of capital into AI infrastructure and applications.
Supporting this leap is a foundation of hardware muscle that did not exist at scale fifteen years ago. Graphics Processing Units (GPUs), initially designed for video games, turned out to be perfect for training deep neural networks. Companies like Nvidia, once known mainly for powering gaming PCs, now sit at the centre of a massive AI compute boom.
Combined with the explosion of cloud infrastructure and the flood of data generated by everything from smartphones to IoT devices, the conditions were right. As tech history shows, when cost curves collide with an obvious use case, the result is a gold rush, and that’s exactly what we’re living through now.
The Hardware Gold Rush: GPUs, Chips, and Power
If there is one clear place where AI’s early profits are being minted, it is in the hardware layer. Nvidia has become the poster child for this transformation. Once a niche gaming chipmaker, it is now the single most important supplier of the GPUs that power most major AI models. Its flagship H100 chips and the upcoming Blackwell generation are the workhorses behind OpenAI’s GPT models, Anthropic’s Claude, Meta’s Llama, and countless others. (Google is the notable exception, training Gemini largely on its in-house TPU chips, though it remains a major Nvidia customer for its cloud business.)
The scale is staggering. Top-end AI chips now sell for $30,000 to $50,000 apiece, and demand so far exceeds supply that delivery times stretch up to a year. Nvidia’s dominance is protected by its CUDA software ecosystem, a programming environment that makes switching to rival chips more difficult than it might appear on paper.
But Nvidia does not stand alone. Beneath it lies a supply chain that has quietly become one of the most geopolitically sensitive in the world. Taiwan Semiconductor Manufacturing Company (TSMC) fabricates the most advanced chips for Nvidia and AMD, and its cutting-edge processes are why these chips are possible in the first place. ASML, a Dutch company, makes the extreme ultraviolet (EUV) lithography machines needed to etch these microscopic circuits, with each EUV unit costing around $200 million and taking months to produce.
Memory makers like SK Hynix and Micron benefit too, because AI training workloads need high-bandwidth memory (HBM), a specialised form of stacked DRAM, in massive quantities. Data centre operators, from Equinix to Digital Realty, are scrambling to expand server farm capacity to keep up. Even seemingly dull sectors like utilities are seeing ripples, as data centres guzzle enormous amounts of electricity, creating local power crunches in regions like Virginia, Texas, and parts of Asia.
Put simply, the hardware layer has been the most straightforward way for investors to play the AI boom so far. The margin structures are strong, demand is visible, and geopolitical competition makes it likely that governments will pour even more subsidies into domestic semiconductor capacity, as seen with Japan’s own Rapidus project and the US CHIPS Act.
The Software & Platform Wars: Who Controls the AI Layer?
While the picks-and-shovels layer is minting money now, the big prize over the next decade is likely to be the software and platform layer, the companies that own the models, data, and user ecosystems.
Microsoft has staked an early lead here. Its multibillion-dollar partnership with OpenAI has embedded OpenAI’s models, under the Copilot brand, across its entire product suite: Office, Windows, and its Azure cloud. This has given Microsoft a dual advantage: it can monetise AI tools directly through subscription tiers and attract cloud customers who want access to OpenAI’s APIs. Google, feeling the competitive heat, has gone all-in with Gemini, betting that its deep AI research bench can translate into commercially viable search and productivity integrations. Amazon is playing a slightly different game by focusing on providing the infrastructure (via AWS) for companies to train and deploy their own models.
Meanwhile, dozens of well-funded startups like Anthropic, Cohere, Mistral, and Perplexity are trying to challenge the Big Tech incumbents. Many are building rival LLMs, some are offering open-source alternatives, and others are targeting narrow verticals like healthcare, legal, or enterprise automation. However, the economics are brutal. Training and operating a frontier model requires compute budgets that only a handful of companies can realistically afford, meaning many startups will remain dependent on cloud giants for both compute and distribution.
There is also a new breed of “applied AI” companies, like Adobe or Salesforce, which are weaving AI into their core business offerings. Here, the value is not in the underlying model but in how well AI is used to automate tasks that were once manual, from generating marketing content to streamlining sales workflows. For now, the jury is out on whether customers will pay enough of a premium for these AI features to justify the billions invested.
Cracks Beneath the Hype: Bottlenecks, Regulations, and Bubble Fears
As with every gold rush, the AI boom is not immune to excess. One real constraint is physical: chip supply is limited, lead times for building new fabs are long, and the energy footprint of training large models is massive. Some estimates suggest the AI industry could double global data centre power demand by 2030, raising uncomfortable questions about sustainability and grid stability.
There is also the problem of “AI hallucinations”: generative models confidently producing wrong or fabricated information. For business-critical applications like law, medicine, or autonomous driving, this is a non-trivial risk that must be solved if AI is to deliver on its loftiest promises.
Regulators are also waking up. The EU’s new AI Act sets out strict guidelines for what AI systems can and cannot do, especially in high-risk areas like biometric surveillance or credit scoring. The US, Japan, and China are all developing their own frameworks. There is a fine balance between encouraging innovation and protecting consumers, but heavy-handed rules could slow adoption or raise compliance costs.
Finally, there is the risk that the investment story runs too far ahead of real economics. Many AI-adjacent stocks have soared on multiple expansion rather than clear evidence of durable profits. History is littered with transformative technologies that nonetheless burned early investors; the dot-com era is the obvious parallel, where early winners stumbled once expectations outpaced cash flows.
The Takeaway: What Does Smart AI Investing Look Like?
For investors today, AI is likely to remain an unstoppable force shaping global markets for the next decade and beyond. But separating hype from substance will be critical. Right now, the hardware layer, Nvidia, TSMC, and ASML, looks most defensible because the physical barriers to entry are high and the demand signals are clear. Big Tech is probably next in line, with Microsoft in particular well-positioned to monetise AI across consumer, enterprise, and infrastructure layers.
The startup space will produce a few breakouts but many flameouts. Investors should be cautious of sky-high valuations without clear monetisation paths. Vertical AI, solving industry-specific problems, might be the more promising angle than trying to compete directly with OpenAI or Google.
AI will also shape industries we cover week to week, from autonomous vehicles and mobility robotics in Honda’s case to smart networks in telecom. It will force legacy companies to adapt, merge, and partner in ways that blur old lines between hardware, software, and services.
For now, the smartest AI bets are the companies quietly laying the groundwork: the ones building picks and shovels, the ones embedding AI into sticky workflows, and the ones with the balance sheets to weather the inevitable hype cycle crash that follows any technological mania.
One thing is clear: the machines aren’t just coming, they’re already here. The only question left is who really profits when the dust settles.
Disclaimer: This content is for informational and educational purposes only and does not constitute financial, investment, or other professional advice. The views expressed are our own and do not reflect the views of any institution we may be affiliated with. We are not licensed financial advisors, and nothing in this publication should be interpreted as a recommendation to buy or sell any securities. Please do your own research or consult a licensed professional before making any investment decisions.