<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Bonnefin Research]]></title><description><![CDATA[Ideas in Motion.]]></description><link>https://www.bonnefinresearch.com</link><image><url>https://substackcdn.com/image/fetch/$s_!jy90!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73ac34ba-4fc6-4020-859e-43be153a1a37_800x800.png</url><title>Bonnefin Research</title><link>https://www.bonnefinresearch.com</link></image><generator>Substack</generator><lastBuildDate>Fri, 10 Apr 2026 06:02:05 GMT</lastBuildDate><atom:link href="https://www.bonnefinresearch.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Bonnefin Group]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[bonnefinresearch@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[bonnefinresearch@substack.com]]></itunes:email><itunes:name><![CDATA[Bonnefin Group]]></itunes:name></itunes:owner><itunes:author><![CDATA[Bonnefin Group]]></itunes:author><googleplay:owner><![CDATA[bonnefinresearch@substack.com]]></googleplay:owner><googleplay:email><![CDATA[bonnefinresearch@substack.com]]></googleplay:email><googleplay:author><![CDATA[Bonnefin Group]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Perils of Passive Investing]]></title><description><![CDATA[No Such Thing as a Free Lunch]]></description><link>https://www.bonnefinresearch.com/p/the-perils-of-passive-investing</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/the-perils-of-passive-investing</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Thu, 26 Feb 2026 
00:00:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/946f7cb6-c6f7-469a-a9a2-42320a5c4e06_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There is a certain elegance to the passive investing thesis. Buy a broad market index, hold it indefinitely, ignore the noise, and let compounding do its work. It is simple, cheap, and backed by decades of data. For most people, particularly those without the time, temperament, and dedication to actively manage their portfolios properly, it is the default approach. But the label &#8216;passive investing&#8217;, particularly the &#8216;passive&#8217; part, can breed a dangerous complacency about what you are actually doing when you allocate capital to an index fund and look away.</p><h2>What Is Passive Investing?</h2><p>Passive investing is an investment strategy that seeks to replicate the returns of a market index rather than outperform it. In practice, this typically means buying a fund, such as an exchange-traded fund or mutual fund, that holds the same securities, in the same proportions, as a given benchmark. The S&amp;P 500, the MSCI World, and the FTSE 100 are among the most commonly tracked indices. The investor does not select individual securities, does not generally attempt to time the market, and does not rely on a fund manager&#8217;s judgement about which stocks are undervalued or overvalued. Instead, they accept the market&#8217;s collective verdict on price and simply own a slice of the whole.</p><p>The strategy is often contrasted with active investing, where a manager or individual investor makes deliberate bets, overweighting certain sectors, avoiding others, moving in and out of positions, in an effort to generate returns above the benchmark or within the confines of other investment mandate goals. 
Passive investing&#8217;s central claim is that, after fees and over time, most people are better off not attempting active management.</p><p>It is a claim the market has increasingly accepted. The growth of passive investing over the past three decades has been one of the most significant structural shifts in financial markets. In August 1996, twenty years after the launch of the first publicly available index fund, passive strategies represented just 6% of US-domiciled equity mutual fund and ETF assets. By 2010, that figure had risen to 19%. By the end of 2025, the shift had become overwhelming: according to the Investment Company Institute, US indexed mutual funds and ETFs held $19.26 trillion in assets, comfortably surpassing the $17.40 trillion held in actively managed funds. Globally, ETF assets alone reached a record $19.85 trillion by the end of 2025, having grown nearly 34% in a single year. Net inflows into global ETFs hit $2.37 trillion in 2025, shattering the previous record. In the US, the Vanguard S&amp;P 500 ETF overtook the SPDR S&amp;P 500 ETF Trust to become the world&#8217;s largest ETF, pulling in $137.7 billion in a single year, which was the largest annual inflow ever recorded for a single fund. The five-firm concentration ratio among US mutual fund families has risen from 35% in 2005 to 56% in 2023, driven in large part by the dominance of passive giants BlackRock, Vanguard, and State Street, who between them control roughly 60% of global ETF assets.</p><p>The Bogleheads (disciples of Vanguard founder Jack Bogle) have built an intellectual framework around passive investing. The core tenets are well established. Most managers underperform their benchmarks over meaningful time horizons, and the few who do outperform rarely do so consistently.
The data here is damning: according to the S&amp;P Indices Versus Active (SPIVA) Scorecard, 65% of actively managed US large-cap equity mutual funds and ETFs underperformed the S&amp;P 500 in 2024, broadly in line with the 64% average annual underperformance rate recorded since 2001. Extend the horizon and the picture worsens dramatically: over fifteen years, more than 90% of US large-cap active equity funds lag their benchmark, and over twenty years, no equity category saw majority active outperformance. The persistence data is equally bleak: among funds that ranked in the top quartile as of December 2020, not a single one remained there over the following four years. Costs compound just as returns do, and the fee differential between an index fund charging under 0.10% and an actively managed fund charging 0.50-1.00% is a near-certain drag on long-term performance. If professional fund managers struggle to beat the market, and if you assume markets are broadly efficient enough that the average investor cannot reliably exploit mispricings, then the costs of active management will erode returns by more than stock selection can add. It should be noted that this data excludes hedge funds, whose industry returns are harder to judge: they have different fee structures, greater opacity, and no industry-standard data, and they often pursue mandates and purposes other than maximising pure returns.</p><p>The historical performance record reinforces this view. The S&amp;P 500 has delivered an annualised total return of approximately 12% over the past fifty years with dividends reinvested, or roughly 8% after adjusting for inflation. To put that in concrete terms, one thousand dollars invested at the beginning of 1975 would have grown to approximately $290,000 by the end of 2025 if allowed to compound the whole way at 12%.
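</p><p>As a quick sanity check on the arithmetic above, here is a minimal compounding sketch. The flat 12% return, the 0.10% and 1.00% fee levels, and the 30-year fee-drag horizon are illustrative assumptions drawn from the figures in the text, not forecasts:</p>

```python
# Sketch: compound growth and fee drag under assumed flat annual returns.
# (Real index returns vary year to year; fees are modelled as a simple
# deduction from the annual return, a deliberate simplification.)

def compound(principal: float, annual_return: float, years: int,
             annual_fee: float = 0.0) -> float:
    """Grow principal for `years` at `annual_return`, net of `annual_fee`."""
    return principal * (1 + annual_return - annual_fee) ** years

lump_sum = compound(1_000, 0.12, 50)        # $1,000 from 1975 to 2025: ~$289,000
cheap = compound(10_000, 0.12, 30, 0.001)   # 0.10% index fund fee
dear = compound(10_000, 0.12, 30, 0.01)     # 1.00% active fund fee
print(round(lump_sum), round(cheap), round(dear))
```

<p>Even a 0.90 percentage point fee gap compounds into a material difference in final wealth over thirty years, which is the sense in which costs compound just as returns do.</p><p>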
Roughly two in every three calendar years have produced positive returns, with the most common annual return falling between 10% and 20%. The investor who bought before the Global Financial Crisis and simply held on was made whole within a few years and went on to enjoy one of the longest bull markets in history. Over the past decade alone, the S&amp;P 500 has delivered an annualised return of nearly 14%, more than tripling an initial investment. Volatility, the Bogleheads argue, is the price of these superior long-term returns. Stay the course, stay diversified, keep costs low, and time will do the heavy lifting. This view contains arguably the single most important financial insight available to the average person. But it is incomplete.</p><h2>Inaction Is Action</h2><p>Inaction is itself a form of action. The decision not to decide is still a decision. This applies directly to passive investing, because the word &#8220;passive&#8221; is doing an enormous amount of misleading work.</p><p>When you buy an index fund, you are making a series of active choices. You are choosing to allocate capital to equities rather than bonds, property, cash, or any other asset class. You are choosing a specific index, most commonly the S&amp;P 500, and in doing so, you are choosing a particular weighting methodology, geographic exposure, and sector composition. You are choosing, consciously or unconsciously, to buy at today&#8217;s valuation, whatever that valuation happens to be. You are choosing to rebalance on the index provider&#8217;s schedule rather than your own. And you are choosing to hold through drawdowns of thirty, forty, even fifty per cent, which is itself one of the most psychologically demanding active decisions an investor can make.</p><p>None of this is passive in any meaningful sense. It is a specific, opinionated capital allocation strategy dressed up as passivity. The investor who dollar-cost-averages into a market-cap-weighted US equity index is not avoiding decisions.
They are making a concentrated bet that the largest American companies, purchased at whatever multiple the market assigns, will continue to compound wealth at rates that justify the price paid. That bet has historically been a good one; it is not, however, a neutral one.</p><p>There is a deeper irony here that is rarely discussed. The indices themselves are not passive constructs. The S&amp;P 500 is not simply the five hundred largest American companies by market capitalisation. It is a curated list, managed by a committee at S&amp;P Dow Jones Indices that exercises judgement about which companies to include and exclude based on criteria like profitability, liquidity, and sector representation. When Tesla was added to the index in December 2020, every passive fund tracking the S&amp;P 500 was forced to buy the stock at whatever price it happened to be trading, which at the time was a valuation that priced in extraordinary future growth. The &#8220;passive&#8221; investor had no say in this. They were, in effect, delegating active decisions to an index committee and calling the result passive.</p><p>This is where the comfortable narrative of passive investing begins to show signs of strain. The passive investing movement has been so successful, both intellectually and in terms of asset flows, that its risks have become systematically underappreciated. And risks that are not acknowledged are, by their nature, more dangerous than those that are.</p><p>The most fundamental issue is valuation indifference. An index fund buys a prescribed basket of securities at whatever price the market offers. It does not ask whether those prices are reasonable. It does not distinguish between a market trading at twelve times earnings and one trading at thirty times earnings. It simply allocates.
This is the precise mechanism that makes indexing efficient and low-cost, but it is also the mechanism that exposes the passive investor to whatever excesses the market has embedded in current prices. If sentiment drives valuations to unsustainable levels, the passive investor is a full participant in driving that mispricing and a full participant in the correction that follows.</p><p>There is also a structural paradox at the heart of the passive thesis that economists Sanford Grossman and Joseph Stiglitz identified decades ago. Passive investing is a free-rider strategy. It works precisely because active investors are doing the costly, difficult work of analysing companies, discovering information, and setting prices. If everyone indexed, there would be no one left to perform this function, and prices would cease to reflect fundamental value. The more capital that flows into passive vehicles, the fewer participants are engaged in price discovery, and the less efficient the market becomes. We are not at that breaking point yet, but the direction is clear: passive funds now account for nearly 60% of US equity fund assets, up from just 6% three decades ago. Even Jack Bogle, the father of indexing, expressed concern before his death in 2019 that it would not be long before index funds owned half of all US stock, a level of concentration he did not consider healthy for markets. The implications for price discovery, market structure, and the very efficiency that passive investing depends upon are not fully understood.</p><p>This connects to a related concern about reflexivity. In a market-cap-weighted index, the largest companies receive the largest allocation. As passive inflows grow, more capital is mechanically directed towards the stocks that are already the most expensive.
Success brings size, size begets passive inflows, and passive inflows drive further price appreciation: a positive feedback loop that can amplify momentum and stretch valuations beyond what fundamentals alone would justify. The extraordinary concentration of today&#8217;s major indices, where a handful of technology mega-caps account for a disproportionate share of total index weight, is at least partly a consequence of this dynamic.</p><p>There is a reason hedge funds still have clients. It is not because their investors are unsophisticated or unaware of the data on active management; the SPIVA numbers are public and unambiguous, and nearly 64% of domestic stock funds were shuttered or merged over the past twenty years. It is because some allocators recognise that buying assets without reference to their intrinsic value carries its own set of risks, and that in certain environments, such as prolonged overvaluation, structural change, or concentrated indices, those risks can materialise in ways that a backward-looking analysis of index returns does not capture. The Japanese investor who bought the Nikkei in 1989 waited over three decades to recover their nominal investment. The past performance of one index in one country over one period is not a universal guarantee.</p><p>This leads to the most important caveat of all: past performance is not a reliable indicator of future results. This disclaimer appears on every fund factsheet for a reason. The extraordinary returns generated by US equities over the past four decades were driven by a specific set of conditions: falling interest rates, globalisation, technological revolution, and expanding multiples. Those conditions may or may not persist. The passive investor is implicitly betting that they will.
That may prove correct, but it is a bet, not a certainty, and treating it as the latter is a form of naive risk assessment.</p><h2>There Are No Free Lunches</h2><p>The appeal of passive investing lies partly in the suggestion that it has somehow solved the fundamental problem of investing: that generating returns requires hard work and accepting risk. The narrative, taken to its logical extreme, implies that you can simply buy the market, forget about it, and wake up wealthy. But there are no free lunches in finance. If passive investing appears to offer one, it is because the risks are not immediately visible due to an extended period of economic and market prosperity. Concentration risk in a handful of mega-cap names, valuation risk in a richly priced market, and the risk inherent in an era of structural change do not disappear just because you are investing &#8216;passively&#8217;. You are somewhat protected in that, as companies rise and fall, you will automatically rotate into the new market leaders, but the risk remains: you are reliant on the overall pie continuing to grow.</p><p>None of this means indexing is a bad strategy. In practice, most people will do perfectly well with a diversified, low-cost index portfolio held over a long time horizon. The data supports this, and the behavioural benefits of a strategy that minimises the temptation to tinker are real and substantial. But &#8220;you will probably be fine&#8221; is a different statement from &#8220;there is no risk,&#8221; and conflating the two is where the passive investing thesis crosses from sound advice into something closer to ideology.</p><p>The most useful framing, then, is not the binary of active versus passive, but a distinction between activity and analysis. You can be passive in your activity: trading infrequently, holding for the long term, avoiding the temptation to time markets, while remaining deeply active in your thinking.
You can hold an index fund and still understand what you own, why you own it, what assumptions are embedded in current prices, and under what conditions those assumptions might fail. The investor who holds through a downturn because they have thought carefully about valuations, earnings trajectories, and historical drawdown recoveries is in a fundamentally different position from the investor who holds through a downturn because someone told them to never sell. Both may achieve the same outcome. But the first has a framework for understanding when the strategy might not work, and the second is relying on blind faith. Passive investing is, for most people, a prudent answer to the question of what to do with their money. But it should not be the end of thinking about investing. It should be the beginning. The decision to index is a decision, and like all decisions, it deserves ongoing scrutiny, honest risk assessment, and intellectual engagement. You can choose not to act. You should never choose not to think.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking.
Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/the-perils-of-passive-investing?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/the-perils-of-passive-investing?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/the-perils-of-passive-investing/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/the-perils-of-passive-investing/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Everyone Should Care About Capital Allocation]]></title><description><![CDATA[Run From It. Hide From It. 
It Is Inevitable.]]></description><link>https://www.bonnefinresearch.com/p/everyone-should-care-about-capital</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/everyone-should-care-about-capital</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Thu, 19 Feb 2026 04:00:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9b2e39e0-a947-4194-b450-562104dcb94a_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Run from it. Hide from it. You cannot escape it. Whether you like it or not, everybody is a capital allocator. Including you. But it doesn&#8217;t have to be all doom and gloom; rather, I think every human should embrace capital allocation because it is what shapes the material world we live in, for good or for bad.</p><p>I will define capital as any asset that is used in the production of goods and services, including physical, human, financial, and natural resources. In essence, capital is any asset that confers value or benefit to its owners. Capital allocation is the deployment of that capital. In other words, how we use that capital.</p><p>Capital allocation and exchange are, at the most fundamental level, how we as a society function in material terms. Every good and service that exists: every hospital, every road, every school, every piece of technology, is the product of capital being directed somewhere by someone. The entire material form of civilisation is built and maintained through the continuous process of deciding where resources should go.</p><p>Regardless of one&#8217;s beliefs on economic systems, about what should be done regarding taxation, wealth redistribution, wealth inequality, the role of government, or the boundaries of free enterprise, all of these debates are inherently debates about capital allocation. A tax policy is a capital allocation decision. A welfare program is a capital allocation decision. 
A tariff, a subsidy, a central bank interest rate, and a defence budget are all instances of capital allocation. The disagreements are not about whether capital should be allocated, but about who should allocate it, by what mechanism, how much, and toward what ends.</p><p>Understanding this does not require adopting any particular political stance. It simply requires recognising that the question of how resources are directed is the central organising question of any economy and any society. Those who understand this, who study how capital flows, who pay attention to where it is deployed effectively and where it is wasted, who grasp the incentive structures that shape these decisions, possess a clearer picture of how the material world actually works than those who do not.</p><p>Everyone is a capital allocator. Sam Walton, the founder of Walmart, built one of the largest enterprises in history around a deceptively simple principle: the customer is the boss. Every dollar a customer spends is a decision about where capital should flow. When billions of people independently make these decisions every day, they collectively direct enormous rivers of capital toward certain companies, industries, and outcomes, and away from others. In this sense, capital allocation is not a remote activity confined to boardrooms and investment committees. It is something every person participates in, whether they recognise it or not.</p><p>Some of these allocation decisions are mundane, like what to buy for dinner, which subscription to keep, and whether to repair an appliance or replace it. Others are consequential, like how much to save, where to invest, whether to pursue further education, when to change careers, and whether to start a business. Each of these choices involves deploying limited resources (time, money, energy, attention) toward one use at the expense of another. That is capital allocation in its purest form.
What separates those who build lasting wealth from those who don&#8217;t is often not income alone, but how effectively they allocate the capital that passes through their hands. The mathematics of compounding plays a clear role in this. A return of ten percent per annum does not sound dramatic in any single year. But sustained over decades, it transforms modest sums into substantial wealth due to the exponential nature of compounding. A person who invests wisely and consistently, who understands the principles of compounding and resists the urge to consume every dollar earned, builds an enormous advantage over time. On the other hand, poor capital allocation, like excessive consumption, high-interest debt, neglected savings, and uninformed investment decisions, compounds in the opposite direction. The gap between a good allocator and a poor one widens with every passing year, not because of any single brilliant or terrible decision, but because of the relentless mathematics of compounding applied to thousands of small decisions over a lifetime.</p><p>Different people desire and need different degrees of wealth, and most go their entire lives without truly figuring out what it is they want and how much material wealth is involved in that. But on the whole, I think it is safe to say that, up to the point where it corrupts and ruins, capital and wealth serve as useful means of increasing our chances of living the lives we desire, as individuals and as collectives.</p><p>In the modern context of capital allocation, one particular source of passion and interest, among a sea of many, is private enterprise. In the modern world, companies are one of the primary vehicles through which capital is allocated. They are the current engines that convert raw materials into products, ideas into services, and labour into living standards.
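</p><p>The compounding gap between good and poor allocators, described a little earlier, can be put in numbers with a minimal sketch. The $5,000 annual contribution and the 10% and 2% rates are illustrative assumptions, not claims about any actual investor:</p>

```python
# Sketch: divergence between two allocators who each direct the same
# capital every year, assuming flat (illustrative) rates of return.

def future_value(annual_amount: float, rate: float, years: int) -> float:
    """Future value of a level annual contribution, compounded yearly."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_amount) * (1 + rate)
    return total

for years in (10, 20, 30, 40):
    good = future_value(5_000, 0.10, years)   # assumed 10% p.a.
    poor = future_value(5_000, 0.02, years)   # assumed 2% p.a.
    print(years, round(good), round(poor), round(good / poor, 1))
```

<p>No single year in the loop looks dramatic; the divergence comes from the exponent, applied year after year, not from any one decision.</p><p>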
To understand how they operate, how they compete, and how they succeed and fail is to understand the machinery of material progress itself. It is overwhelmingly private enterprise that drives the change, innovation, and maintenance of the physical world we interact with every day, and even many government services are outsourced to companies (depending on where you are located).</p><p>Investing in companies, then, is not some detached financial exercise. It is a direct form of capital allocation. A decision to place resources behind a particular set of people, ideas, and systems, with the expectation that they will create value over time. When an investor buys a share, they are, directly or indirectly, expressing a view on how effectively that enterprise converts capital into outcomes. Done well, this process channels resources toward competent management, sound strategy, and genuine innovation. Done poorly, it sustains inefficiency and misallocates wealth that could have been deployed more productively elsewhere. The investor, whether they think of it this way or not, is participating in the same allocative process that shapes the broader economy. As the world moves forward, companies will likely continue to play a significant role in shaping not only investment returns, but the very living standards, habits, and lifestyles of people across the globe. This serves simply as an argument for why everyone, not just professional investors, should pay attention to private enterprise and to capital allocation decisions at large.</p><p>Consider the smartphone as an example of the beauty in capital allocation. Long before a device reaches a customer&#8217;s hand, its existence begins in the extraction and processing of raw materials. Lithium is mined and refined for batteries. Copper is drawn for wiring and motors. Cobalt and nickel are sourced for energy density and stability. Rare earth elements are processed for magnets, speakers, and haptic systems.
Silica is refined into ultra-pure silicon for semiconductors. These processes are capital-intensive, energy-hungry, and geographically dispersed, often spanning multiple continents before a single component is produced. From there, specialised firms translate raw inputs into highly engineered parts. Semiconductor design companies such as Apple and Qualcomm architect chips at the transistor level, optimising for performance, power efficiency, and thermal constraints. These designs are then manufactured by foundries like TSMC or Samsung using extreme ultraviolet lithography, a process involving atomic-scale precision, multi-billion-dollar fabrication plants, and some of the most complex machinery ever built. Other suppliers, such as Samsung Display, LG Display, CATL, LG Energy Solution, SK Hynix, Micron, Sony, Broadcom, and Qorvo, independently produce displays, batteries, memory, sensors, and radio frequency components, each relying on its own deep technical expertise and supply network. A central integrator and brand, such as Apple, coordinates this entire system. It defines product architecture, industrial design, software, performance targets, and quality standards, while orchestrating final assembly through manufacturing partners like Foxconn or Pegatron. Tens of thousands of components, sourced from hundreds of suppliers, must arrive at the right place, at the right time, at the right quality, for a single device to be assembled at scale. Global logistics underpin every step. Shipping companies, ports, freight forwarders, and customs systems move materials from mines to refineries, from fabs to assembly plants, and from factories to retailers and consumers worldwide. Inventory management, demand forecasting, and just-in-time production reduce costs and delays but increase system-wide coordination requirements. Any disruption, whether due to geopolitical tension, energy shortages, or natural disasters, can ripple through the entire chain.
Even then, the phone is a fraction of itself without the invisible infrastructure that surrounds it. Telecommunications companies operate cellular towers, fibre-optic networks, and undersea cables that transmit data across continents and oceans. Cloud providers run vast data centres filled with servers, networking equipment, cooling systems, and backup power, enabling everything from messaging and navigation to video streaming and payments. Software developers, cybersecurity firms, and network equipment manufacturers all play roles in ensuring devices can communicate securely and reliably at global scale.</p><p>Energy is the common denominator throughout this system. Mines, refineries, fabs, factories, cargo ships, data centres, and wireless networks all depend on continuous electricity generation and fuel supply. Once in the user&#8217;s hand, the device still relies on power grids, charging infrastructure, battery chemistry, and software designed to manage energy consumption efficiently. Even a single tap on a screen activates a chain of energy-consuming processes across servers, networks, and devices in multiple countries. And this is all before we even begin discussing the wide range of complex interactions that create the software and applications that actually make using a phone such a valuable experience, one that has changed the fabric of society and the way we as individuals live and perceive the world. Nor have we touched on the technical processes and innovations these companies invent and utilise to make it all possible.</p><p>Just a few hundred years ago, paper and pen were scarce and largely privileged goods, a constraint that was a primary driver of widespread illiteracy. The advent of the printing press and the mass supply of reading and writing materials radically lowered the cost of knowledge, enabling education at scale. 
This, in turn, empowered broad segments of society, raised living standards, expanded invention, and allowed human talent to be more fully realised. The distance we have travelled since then is extraordinary. Just look at how data and communication function now compared to then, and much of this progress is undeniably attributable to private enterprise, operating alongside supportive public infrastructure and institutions.</p><p>What ultimately appears to the end user as a simple exchange, a portion of their earnings traded for a slim piece of glass and metal, is the final output of one of the most complex coordination problems humanity has ever solved. And flowing through every one of these seemingly countless touchpoints is money and the movement of capital. Studying capital allocation, then, is not merely about profits and balance sheets, although it certainly can be the focal point if you want. It is also about understanding the systems that allocate capital, organise labour, deploy energy, and transform ideas into the everyday tools that modern life quietly depends on and often takes for granted.</p><p>This is how the modern supply chain for an object like a smartphone looks today. Companies will change, systems will evolve, and the balance between private enterprise and public ownership will shift in ways we cannot fully anticipate. But what will not change is something more fundamental: that human progress is driven by the exchange of resources, the transformation of those resources into something more valuable, and the further exchange of that value for capital that can then be deployed as the holder sees fit. The actors, the technologies, and the institutions may look entirely different a century from now. The underlying logic will not. 
While most of the ideas in this article have been written from the perspective of a capitalist society, as nearly all modern societies operate in this manner to varying degrees, the underlying principles apply to any economic system, where you will find the same ideas at work.</p><p>Every decision, every event, every piece of human history can be viewed through the lens of capital allocation to some degree. Don&#8217;t get me wrong, it&#8217;s not the be-all and end-all. There are many lenses through which to view the history of the world, such as the pursuit of happiness, the dream of meaning, the hunt for greatness, the manipulation of energy, the chase for power, the fate of incentives, and so forth, but capital allocation is among the exclusive few that can truly explain so much of human history.</p><p>The world will continue to change, shaped by the decisions of governments, companies and individuals allocating capital in small and large ways. Some of these decisions will prove wise, others wasteful, whether at an individual or collective level. But taken together, they will determine the trajectory of living standards, the pace of innovation, and the quality of daily life for billions of people. Everyone should care about capital allocation because, whether they know it or not, everyone is already doing it.</p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. 
Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p>]]></content:encoded></item><item><title><![CDATA[The Humanoid Robotics Boom]]></title><description><![CDATA[Opportunities and Threats as the Industry Grows]]></description><link>https://www.bonnefinresearch.com/p/the-humanoid-robotics-boom-part-i</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/the-humanoid-robotics-boom-part-i</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Tue, 17 Feb 2026 23:01:11 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/ebce2bc6-456a-4409-a83c-06e6b22e88c0_1200x630.png" length="0" type="image/png"/><content:encoded><![CDATA[<h2>The Humanoid Robotics Industry</h2><p>For decades, industrial automation has been dominated by stationary robotic systems designed for high precision and repeatability within tightly controlled, task-specific workflows. Today, some companies are attempting to shift the industry towards mobile and general-purpose robotic machines, driven by the desire for greater flexibility across less structured environments. The form that has been receiving the most attention of late is the humanoid (a non-human entity with human form or characteristics). Some of this fascination is certainly due to dreams of recreating something similar in function and form to ourselves, but there is straightforward logic on the economic and productivity front. The built world was designed for human bodies. Doorways, staircases, tools, workstations, vehicles, warehouses: all are optimised for a bipedal form (using only two legs for walking) roughly 1.5 to 1.9 metres tall with two arms and two hands. A robot that can operate in these environments without modification to the environment itself unlocks an addressable market orders of magnitude larger than traditional industrial automation.</p><p>The convergence of several enabling technologies such as large language models for natural language instruction, transformer-based vision systems, improved battery energy density, and cheaper high-torque actuators (the component of a machine responsible for moving or controlling a system by converting energy into physical motion), has made humanoid robots technically plausible for the first time. The question is no longer whether they can be built, but whether they can be built at a cost and reliability level that justifies commercial deployment. 
That question remains open.</p><p>Research firms are producing enormous numbers, and they disagree wildly. Projections for the global humanoid robot market by the mid-2030s range from as little as US$4 billion (Grand View Research) to US$182 billion (Future Market Insights), a gap of roughly 45x. MarketsandMarkets forecasts growth from US$2.92 billion in 2025 to US$15.26 billion by 2030, while other estimates are far more aggressive, reaching US$38 billion by 2035. At the conservative extreme, DIGITIMES Research sees humanoid robots at just 0.2% of the global robot market in 2025, rising to 2% by 2030. Morgan Stanley anticipates slow adoption until the mid-2030s, accelerating only in the late 2030s and 2040s. The difference between the floor and ceiling is the difference between a niche industrial subsegment and a civilisation-altering technology. The range should tell investors something clear: in this field, nobody really knows anything for certain. The most basic commercial questions remain unanswered: who will buy these robots, at what price, what tasks will they perform, and what will their functional capabilities be? As a result, we believe that investors must be prudent and grounded, while also ensuring they embrace the reality of progress and not hide from it.</p><p>Global humanoid robot installations reached an estimated 16,000 units in 2025, and the market is currently dominated by Chinese manufacturers: four of the top five players by global market share (AgiBot, Unitree, UBTECH, Leju Robotics) come from China. These and other Chinese firms are shipping in volume at price points between US$5,000 and US$100,000 that Western competitors struggle to match. The cost advantage is structural, underpinned by lower labour costs, government subsidies, and vertically integrated domestic supply chains for actuators, batteries, and sensors. 
However, before we get ahead of ourselves, it&#8217;s important to note that these numbers really don&#8217;t mean much for now. Humanoids in the automotive sector, the first major deployment vertical, remain in early pilot testing, performing basic tasks like badge labelling, material handling, and visual inspection. These are tasks that existing industrial robots already perform capably. The humanoid form factor&#8217;s genuine value proposition, operating in environments designed for humans, using human tools, and working safely alongside human workers, has not yet been demonstrated at commercial scale. In other words, the benefits are not yet proven; the question is whether they can be realised in the future and, if so, what that is worth.</p><p>The current humanoid field extends beyond these volume leaders, with Western companies also investing aggressively across the sector. Tesla&#8217;s Optimus program is transitioning from internal prototyping toward commercial production. On its Q4 2025 earnings call in January 2026, Tesla announced it would end Model S and Model X production by mid-2026 and repurpose the Fremont, California assembly lines for Optimus manufacturing, with a long-term target of one million units per year. The Gen 3 Optimus prototype is expected to be unveiled in Q1 2026, with start of production targeted before the end of 2026 and consumer sales aimed for late 2027. Tesla&#8217;s aspirational unit price remains US$20,000-30,000 at scale, though most analysts expect initial commercial pricing in the US$100,000-150,000 range. 
However, progress has been slower than Musk&#8217;s forecasts: on the same earnings call, he acknowledged that no Optimus robots were yet performing useful autonomous work at Tesla facilities, despite having predicted thousands would be operational by the end of 2025.</p><p>Figure AI has become one of the highest-valued pure-play humanoid companies, raising over US$1 billion in its September 2025 Series C at a US$39 billion valuation, bringing total funding past US$1.9 billion. Its Figure 02 completed an 11-month deployment at BMW&#8217;s Spartanburg plant, running daily 10-hour shifts and loading over 90,000 parts. The company operates a Robot-as-a-Service model at approximately US$1,000 per robot per month and has ended its OpenAI collaboration, now building AI entirely in-house through its Helix platform.</p><p>Agility Robotics leads in commercial deployment: its bipedal Digit robot reached 100,000 totes moved at a GXO Logistics facility in late 2025, widely regarded as the first full-time commercial humanoid deployment. Agility closed a US$400 million Series C in March 2025 (total funding approximately US$641 million), with unit pricing estimated at US$250,000-plus alongside a RaaS subscription option. Its RoboFab facility in Oregon has capacity for 10,000 units per year.</p><p>Boston Dynamics unveiled the production version of its all-electric Atlas humanoid at CES 2026. Atlas is an enterprise-grade bipedal robot designed for industrial tasks such as material handling and order fulfilment, standing 1.9 metres tall with a 2.3-metre reach, 30 kg lifting capacity, four hours of battery life with autonomous self-swapping, and 56 degrees of freedom enabling movement beyond human range. All 2026 production is committed to Hyundai&#8217;s Robotics Metaplant Application Center and Google DeepMind, with a DeepMind partnership integrating Gemini Robotics foundation models for improved perception and autonomy. 
Hyundai (approximately 80 per cent owner) plans broader factory deployment by 2028 and is building a dedicated robotics facility with 30,000-unit annual capacity. Unit pricing has not been disclosed.</p><p>Sanctuary AI&#8217;s Phoenix, now in its eighth generation, is powered by Carbon, a proprietary AI system combining symbolic reasoning with large language models. Partnered with Magna International for automotive manufacturing, Phoenix has demonstrated industry-leading dexterity (21-DOF hydraulic hands, near-human tactile sensitivity), though at approximately US$140 million in total funding it remains significantly less capitalised than competitors. 1X Technologies (OpenAI-backed) opened pre-orders for its consumer-focused NEO humanoid in October 2025 at US$20,000 (or US$499/month), with first US deliveries planned for 2026.</p><p>The fragmentation of the OEM landscape is an important structural feature for supply chain investors. A fragmented market usually means no single OEM has the scale or incentive to vertically integrate every component, creating persistent demand for some specialist suppliers. Conversely, if the market consolidates around one or two vertically integrated players, the specialist component supplier opportunity narrows dramatically, although it may not disappear.</p><h3>Further Remarks</h3><p>Whether humanoid robotics can achieve the technical benchmarks required for mass deployment remains uncertain, but the underlying demand case and potential benefits are compelling if even some of those technical goals are met. The fundamental promise is one of expanding the effective labour supply and quality. Robotic workers can operate continuously without fatigue, scale predictably without recruitment constraints, maintain consistent output quality, and perform dangerous or physically demanding tasks without injury risk; in many applications, they would simply be more efficient and proficient than humans. 
This matters because the need is already acute. Labour shortages are structural and worsening across developed economies. Ageing populations in Japan, South Korea, Germany, and increasingly the United States are creating a gap between labour demand and supply that immigration alone cannot fill, and global labour costs continue to rise. Furthermore, the long-term view is that, much like agricultural innovation throughout history, this is not simply a story of replacement. By absorbing repetitive, physically intensive, or hazardous work, robotics frees human workers to move into roles where humans hold a genuine comparative advantage and value, thereby creating more total wealth. Many of these roles probably do not exist yet and require the human imagination to unlock.</p><p>It is important to remember that automation adoption, along with nearly all technological adoption, has historically followed a predictable S-curve: slow initial uptake, then rapid acceleration once cost-per-task falls below the human wage equivalent.</p><p>Among the catalysts that investors and keen observers should track, Tesla&#8217;s push toward mass production of its Optimus robot, expected around 2026&#8211;2027, is the most visible: success would validate demand, drive supply chain maturation, and set pricing benchmarks for the industry. But Optimus is one program among many, with the aforementioned companies progressing on different timelines and with different technical approaches. The competitive landscape is broad, and investors should be cautious about equating the industry&#8217;s prospects with any single company&#8217;s execution until we start seeing consolidation. 
In other words, as with any industry at a nascent or reinvention stage, it&#8217;s a bit of a crapshoot with a lot of luck involved; to maximise skill-based decision making, you must follow companies&#8217; real value propositions and their execution in delivering to customers.</p><p>The cost curve is critical. The key inflection point arrives when unit costs fall far enough that a robot&#8217;s total cost of ownership (purchase price, maintenance, energy, and software) becomes cheaper over its operational life than the fully loaded cost of the human labour it replaces. In warehouse and manufacturing settings, where annual labour costs per worker often run between US$40,000 and US$70,000 including benefits and overheads, a robot that costs meaningfully less than a single year&#8217;s wages and operates for several years reaches a compelling payback threshold. Several factors are driving costs downward: battery cost curves continuing their decade-long decline, actuator and sensor commoditisation, supply chain standardisation, and software costs amortised across larger fleets.</p><p>AI capability continues to expand what these machines can do, but another bottleneck lies in integrating perception, planning, and fine motor control in real-world environments. Current systems still struggle with deformable objects, truly unstructured spaces, and graceful recovery from the unexpected. The gap between demo performance and production-grade reliability remains significant, and closing it is arguably the single most important variable in the deployment timeline. Furthermore, not all applications are equal. Deployment in a controlled warehouse, with its structured layout, predictable tasks, and limited human interaction, appears to be the near-term opportunity. Aged care, construction, and retail will require substantially more capable systems. 
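The total-cost-of-ownership payback arithmetic described above can be sketched in a few lines. This is an illustrative model only: the function name and every input figure are hypothetical, chosen to sit within the ranges quoted in the text, not data from any manufacturer.

```python
# Illustrative robot-vs-human-labour payback model.
# All figures are hypothetical inputs within the ranges quoted above.

def payback_years(robot_price: float, annual_opex: float,
                  annual_labour_cost: float) -> float:
    """Years until cumulative robot cost falls below cumulative labour cost.

    robot_price: upfront purchase price (US$)
    annual_opex: maintenance + energy + software per year (US$)
    annual_labour_cost: fully loaded human wage replaced per year (US$)
    """
    if annual_labour_cost <= annual_opex:
        return float("inf")  # the robot never pays back
    # Each year the robot saves (labour cost - running cost); the upfront
    # price is recovered once cumulative savings exceed it.
    return robot_price / (annual_labour_cost - annual_opex)

# A US$100,000 robot with US$12,000/yr running costs replacing a
# US$55,000/yr fully loaded warehouse wage:
years = payback_years(100_000, 12_000, 55_000)
print(f"Payback in ~{years:.1f} years")  # ~2.3 years
```

On these assumed inputs the payback is roughly 2.3 years, which is why the text's threshold of "meaningfully less than a single year's wages" at the aspirational US$20,000-30,000 price point would be so economically decisive.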
The likely path is not a single moment of mass adoption but a rolling sequence of sector-by-sector deployment, with different S-curves and different players or products involved in each area.</p><p>Regulatory frameworks will be another major factor, as there may be unrest over unemployment effects if these robots lead to increased layoffs and reduced hiring. Beyond this, there will be further questions about the safety standards and liability clarity that commercial deployment at scale requires.</p><p>The bull case is that humanoid robots follow the trajectory of smartphones: expensive early devices giving way to exponential cost reduction and a mass market (although unlike smartphones, there may be more space for different types of humanoid robot companies and products to take different parts of the market). The bear case is that they remain in technical and/or commercial limbo: perpetually near but never quite arriving.</p><h2>Tesla as the Catalyst Driving the Sector (For Now)</h2><p>It goes without saying that from the mainstream perspective, it is Tesla and Elon Musk that currently capture the imagination of most, and it is Tesla&#8217;s Optimus that induces the most excitement and attention.</p><p>As mentioned before, Tesla is undergoing a transition at its Fremont factory, repurposing production lines from the Model X and S to produce Optimus Gen 3. While it will take a while for Optimus to become available for sale, with the current projection being 2027 (and there&#8217;s probably a good chance it will be later), the signal is clear: Tesla believes it has a viable commercial product on its hands. Tesla&#8217;s stated production targets are ambitious: 1 million units per year at Fremont (the theoretical maximum capacity of the repurposed lines), and some reports expect scaling to at least 4 million per year at Giga Texas in the coming years, with 4-10 million per year across multiple facilities by 2028-2030. 
Musk has said that roughly 80% of Tesla&#8217;s future value will come from Optimus and related AI businesses. Whether or not one accepts that figure, the capital being deployed is real for now.</p><h3>Optimus&#8217; Functionality and Timeline</h3><p>Gen 3 represents a meaningful technical step forward. Current demonstrated capabilities include walking at up to roughly 8 km/h, lifting and sorting small objects, scripted pick-and-place tasks in factory settings, autonomous self-charging, and improved navigation powered by Tesla&#8217;s Full Self-Driving neural network architecture. Over 1,000 Gen 2 and Gen 3 units are reportedly operating inside Tesla&#8217;s own factories, primarily on tasks like battery cell sorting and material handling.</p><p>The most significant hardware advance is in the hands, which now feature 22 degrees of freedom, double Gen 2&#8217;s 11. Degrees of freedom (DoF), in robotics, refers to the number of independent axes along which a joint or mechanism can move; a human hand is typically described as having 27, meaning each finger and the wrist can bend, rotate, and extend in multiple directions independently. Gen 3 achieves its 22 DoF by relocating the actuators (the motors that drive movement) from the hand itself into the forearm, connected via a tendon-driven system that mimics the way human forearm muscles control the fingers through tendons. This allows for a lighter, more dexterous hand capable of manipulating small and delicate objects. Tesla claims Gen 3 can perform over 3,000 discrete tasks, though this figure is unverified in unstructured real-world environments.</p><p>However, there are substantial limitations and challenges for Optimus. True autonomy in unstructured settings, the kind needed for reliable shift work in a factory, let alone household use, is unproven. 
At Tesla&#8217;s October 2024 &#8220;We, Robot&#8221; event, the company did not disclose that Optimus robots interacting with attendees were being remotely controlled by human operators, a fact later confirmed by then-program lead Milan Kovac. Balance on uneven terrain, manipulation of deformable or irregularly shaped objects, and sustained performance under real-world conditions (dust, moisture, impacts, thermal stress) all remain hard engineering problems. Battery life under continuous workloads is another open question. Roboticist Rodney Brooks, co-founder of iRobot, has described the vision of humanoid robots as general-purpose assistants as &#8220;pure fantasy thinking,&#8221; citing fundamental coordination challenges. Bridging the gap between a polished demo and an 8-hour shift is where most robotics programs stall. Tesla&#8217;s robotics (and automobile) ambitions have usually followed a pattern familiar to anyone who has tracked the company: directionally correct, unreliable on timing. Musk projected production-ready robots by 2023 at the original 2021 AI Day. In March 2025, he spoke of producing &#8220;at least one legion&#8221; (roughly 5,000 units) by year&#8217;s end, scaling to 50,000-100,000 in 2026. Actual 2025 output was only in the hundreds. Reporting from The Information revealed that Tesla hit major technical problems with the robot&#8217;s hands, leaving completed bodies sitting idle in factories awaiting hand and forearm components. Combine all of this with an increasingly competitive landscape, and Tesla is entering perhaps the most pivotal era of its robotics program.</p><h2>The Optimus Supply Chain</h2><p><em>*To be clear, much of the following is from reports and sources outside of official Tesla channels.</em></p><p>Despite Tesla&#8217;s aggressive vertical integration philosophy, Optimus relies heavily on a network of third-party suppliers. 
Musk acknowledged on the Q4 2025 earnings call that the robot uses &#8220;a completely new supply chain&#8221; with &#8220;really nothing from the existing supply chain&#8221; of Tesla&#8217;s vehicle business. He further noted that Tesla had &#8220;tried desperately&#8221; to use existing motors, actuators and sensors but &#8220;nothing worked for a human robot hand at any price,&#8221; forcing the company to &#8220;design everything from physics first principles.&#8221; This means Tesla cannot leverage its established automotive supplier relationships and must build new partnerships largely from scratch, a process Musk warned would produce a &#8220;longer and slower manufacturing S-curve&#8221; than products sharing components with existing lines.</p><p>One of the most striking features of the reported bill of materials for Optimus is the dominance of actuators. Linear and rotary actuators together account for an estimated 40-56% of total component cost, depending on the analysis. Morgan Stanley&#8217;s work places the figure at the higher end, while other Chinese brokerage research suggests a range closer to 44&#8211;45%. Sanhua Intelligent Control is widely reported to be the primary supplier of actuator assemblies for Optimus. In October 2025, Chinese media reported a 5 billion yuan (approximately US$685 million) order for linear actuators which is enough, by industry estimates, for roughly 180,000 units. Neither party confirmed it: Sanhua stated it had no material information to disclose, while Tesla China said there was no official information to share externally. A further 1.2 billion yuan order was reported in December 2025, estimated to cover around 43,000 additional units. The supplier landscape, however, has become less clear-cut over time. Initial Chinese brokerage research said Sanhua was the exclusive linear actuator supplier and Tuopu Group was the exclusive rotary actuator supplier. 
But the December reporting attributed supply of both linear actuators and all 14 rotary joints per unit to Sanhua, suggesting either changing dynamics or unreliable reporting. Tuopu, which began mass-producing self-developed rotary actuators in 2024 and has committed 5 billion yuan to a dedicated robotics production base in Ningbo, continues to be identified as a core Tier 1 supplier. Whether a genuine dual-supply arrangement is emerging, whether Sanhua is consolidating its position, or whether other suppliers are involved remains an open question. Both companies are established Tesla automotive suppliers who pivoted aggressively into robotics, and both saw significant share price moves on the back of Optimus-related reporting.</p><p>Beyond actuators, the supply chain includes TSMC and Samsung for AI chip fabrication (Tesla designs the chips in-house), CATL for high-density battery cells, Green Harmonics for harmonic reducers (a component critical for precise joint movement, where Green Harmonics holds over 60% of the domestic Chinese market, breaking the previous monopoly of Japan&#8217;s Harmonic Drive Systems) and Keli Sensing for six-dimensional torque sensors benchmarked against industry leader ATI. Keli remains in small-batch verification rather than full production supply. Tesla designs the dexterous hands in-house, and the FSD/AI chip, at roughly a quarter of BoM cost (approximately 50,000 yuan per unit, or around 26.5% of total cost according to Chinese brokerage estimates), is Tesla-designed but externally fabricated.</p><p>The supply chain is overwhelmingly Chinese and Taiwanese. With the exception of Tesla&#8217;s in-house AI chip design and its US assembly operations, nearly every major Optimus component supplier is based in China or Taiwan. This creates meaningful geopolitical exposure and is a risk that has already materialised. 
In April 2025, China&#8217;s Commerce Ministry imposed export controls on seven rare earth elements and magnets, in retaliation for US tariffs. Musk confirmed on the Q1 2025 earnings call that Optimus production was being impacted by what he called the &#8220;magnet issue,&#8221; noting that China was requiring assurances that the rare earth magnets would not be used for military purposes. Tesla was working with Chinese authorities to secure an export licence, but the licensing process was described as opaque and potentially taking weeks to months. Public trade data compiled by supply chain intelligence firm Sayari indicates that Tesla appears to have sourced all of its neodymium-iron-boron (NdFeB) magnets from Chinese suppliers, and there are currently no comparable non-Chinese suppliers for several key components at the required scale and cost. Supply chain diversification is a multi-year endeavour, which means that for investors evaluating Optimus-related suppliers, geography presents both opportunity and risk.</p><p>The supplier relationships with Sanhua and Tuopu give those companies significant revenue visibility and near-term pricing leverage. However, they also carry the long-term risk that Tesla eventually brings actuator manufacturing in-house as volumes justify the capital investment, or finds alternative suppliers. This is a pattern well-established in Tesla&#8217;s automotive business with items like battery cells and computing chips. Sanhua&#8217;s own financial disclosures suggest awareness of this dynamic: the company has been working to reduce its Tesla revenue concentration from 35% to 28%, diversifying its customer base to avoid single-client dependence.</p><h2>Supply Chain Hunting</h2><p>The classic picks-and-shovels argument is that when a new technology wave emerges, it is often more profitable to invest in the companies supplying critical components than in the OEMs building the end product. 
The logic is that OEMs face intense competition, margin pressure, and execution risk, while component suppliers with proprietary technology can sell to multiple OEMs and benefit from the growth of the entire ecosystem regardless of which individual OEM wins. In humanoid robotics, this logic may be relevant. Currently, the OEM landscape is fragmented, competitive, and unprofitable. None of the humanoid robotics companies are generating meaningful revenue yet, and most are burning cash at venture-backed rates. Picking the OEM winner is nearly impossible. But every humanoid robot, regardless of manufacturer, requires the same categories of components: actuators, batteries, sensors, harmonic reducers, AI chips, and structural elements.</p><h3>The Sensor Opportunity</h3><p>Every humanoid robot requires extensive sensing: force/torque sensors at each joint for compliant motion control, tactile sensors in the hands for manipulation feedback, inertial measurement units (IMUs) for balance, encoders for joint position, and vision systems for navigation. The sensor stack is where the robot&#8217;s physical intelligence resides; without it, the robot is a blind, clumsy machine regardless of how sophisticated its AI model is. To understand why sensors matter so profoundly, consider that they are what close the loop between the robot&#8217;s AI brain and the physical world. Without them, even the most sophisticated neural network is essentially issuing commands into a void. Something as seemingly simple as picking up a coffee cup requires continuous real-time feedback at every stage of execution. Force/torque sensors in the wrist and fingers detect the moment of contact and modulate grip pressure: too little and the cup slips, too much and it shatters. The robot has no inherent sense of feel the way humans do; that entire channel of physical awareness must be synthetically recreated through sensor data. 
Without force feedback, the robot is guessing, and guessing doesn&#8217;t work when you&#8217;re handling eggs, assisting an elderly person, or inserting a component on an assembly line.</p><p>Balance presents another dimension entirely. Humans maintain upright posture through an extraordinarily complex feedback loop involving the inner ear, signals from muscles and joints, and visual cues, all processed subconsciously at remarkable speed. A bipedal humanoid must replicate this with IMUs measuring orientation and angular velocity, joint encoders tracking limb positions, and force sensors in the feet detecting ground contact and weight distribution. The robot&#8217;s balance controller is constantly making micro-adjustments based on this sensor fusion, hundreds of times per second. Remove any one input and the system degrades rapidly. For instance, a robot without foot pressure sensing cannot reliably detect uneven terrain or anticipate a stumble.</p><p>Then there is the navigation and spatial awareness layer. Vision systems and depth sensors allow the robot to map its environment, identify objects, avoid obstacles, and understand context. But vision alone is not sufficient for manipulation: you can see a door handle, but you cannot feel whether it is locked until you apply torque and sense resistance. The sensor types are complementary rather than redundant: vision tells the robot what and where; force and tactile sensing tell it how much and how carefully.</p><p>What makes this especially critical for the humanoid form is that these robots are intended to operate in unstructured human environments, such as homes, warehouses, and hospitals, where nothing is precisely positioned and conditions change constantly. An industrial robot arm on a factory floor can get away with less sensing because its environment is controlled and predictable. 
A humanoid walking through a cluttered room, picking up varied objects, and interacting safely with people needs rich, multi-modal sensory input just to function at a basic level. The AI provides reasoning and planning, but the sensors provide embodiment, the ability to exist in and interact with the physical world rather than merely issuing abstract motor commands. Every improvement in sensor quality, speed, and coverage translates directly into more capable, safer, and more adaptable robots.</p><p>The market numbers, for what they are worth (the range is large and many estimates are speculative), reflect this perceived importance. Force and torque sensors held approximately 28 per cent of the robotic sensor market in 2024, with strain-gauge technology accounting for 34 per cent of total robotic sensor technology by sensing method. The broader 6-axis force torque sensor market was valued at roughly US$225 million in 2023 and is projected to reach approximately US$2.3 billion by 2030, implying a CAGR of around 40.5 per cent. Within the humanoid-specific segment, the torque sensor market stood at US$515 million in 2024 and is forecast to reach US$7.87 billion by 2032 at a CAGR of roughly 48 per cent, while humanoid robotic sensors more broadly are projected to grow at a 38.5 per cent CAGR through 2030. If these projections are even directionally correct, the sensor market within humanoid robotics will grow from hundreds of millions to billions of dollars within a decade. The question for investors is which sensor companies will capture this value, at what margins, and with what competitive durability.</p><h3>A Historical Examination</h3><p>Before committing capital to any supply chain thesis in humanoid robotics, it&#8217;s worth examining one of the more recent and arguably most relevant historical parallels: the smartphone revolution&#8217;s component ecosystem. 
The smartphone wave created enormous aggregate value, but that value was distributed unevenly, and often counterintuitively, across the supply chain. Understanding why certain suppliers captured outsized long-term returns while others were marginalised despite genuine technological contributions offers a framework for evaluating opportunities in robotics.</p><p>ARM Holdings captured value across the entire smartphone ecosystem through its architecture licensing model, collecting royalties on virtually every application processor shipped regardless of which OEM won market share in any given quarter. Its position was defensible not merely because the instruction set architecture was technically excellent, but because an entire software ecosystem (operating systems, compilers, developer tools, application libraries) had been built around it over decades. Displacing ARM didn&#8217;t require designing a better chip; it required rebuilding an ecosystem, which is a categorically different and vastly more difficult undertaking. ARM&#8217;s moat was architectural and systemic, not merely technical. Its licensing model meant that each incremental smartphone sold generated royalty revenue at near-zero marginal cost, and its customer base spanned Apple, Samsung, Qualcomm, MediaTek, and dozens of smaller licensees. No single customer&#8217;s trajectory entirely determined ARM&#8217;s fortunes.</p><p>TSMC became the indispensable manufacturing platform powering every major fabless chip designer. Its defensibility derived from the compounding nature of leading-edge semiconductor fabrication: each successive process node demanded billions in capital expenditure and years of yield optimisation that couldn&#8217;t be shortcut by competitors willing to simply spend more. The knowledge embedded in TSMC&#8217;s manufacturing processes was cumulative and deeply tacit. 
Its economics scaled powerfully with volume because fab utilisation is the primary driver of semiconductor manufacturing profitability; more wafers processed across a fixed capital base meant expanding margins. And TSMC served Apple, Nvidia, AMD, Qualcomm, Broadcom, and a long tail of smaller customers.</p><p>Qualcomm dominated mobile baseband processors through a combination of deep RF engineering expertise and an enormous patent portfolio embedded in 3G and 4G cellular standards. Replicating Qualcomm&#8217;s position required not just matching its silicon design capabilities but navigating a thicket of standards-essential patents that made independent entry legally prohibitive. Its Snapdragon platform scaled with global smartphone volumes, particularly as emerging markets drove unit growth in the mid-2010s. Qualcomm&#8217;s customer diversification was real but imperfect: it maintained meaningful revenue across Chinese OEMs like Xiaomi, Oppo, and Vivo, but its relationships with Apple and Samsung were periodically contentious and subject to legal disputes that introduced genuine earnings volatility.</p><p>There were certainly other factors at play, such as counter-positioning in TSMC&#8217;s business model, but the general framework is that winners possessed three traits: technology that couldn&#8217;t be easily replicated or brought in-house, a business model that scaled with volume rather than being diluted by it, and customer diversification that avoided existential dependency on a single OEM.</p><p>On the other hand, many component suppliers that were genuinely critical during the early smartphone era ultimately failed to deliver outsized long-term investment returns, not because their technology was unimportant, but because the structural characteristics of their competitive positions made them vulnerable to the very market growth that was supposed to be their tailwind. 
Audience developed noise suppression technology designed into Apple and Samsung flagships, but its function sat within a processing pipeline controlled by the chipmaker. When Qualcomm and MediaTek integrated good-enough noise suppression directly into their chips, the need for a standalone solution disappeared. InvenSense pioneered MEMS gyroscopes with design wins at Apple, Samsung, and Nintendo, but motion sensing sat within a framework increasingly managed by the same chip designers who bought InvenSense&#8217;s components, meaning its customer was also its most dangerous competitor. Peregrine Semiconductor produced high-performance RF switches designed into handsets across multiple OEMs, but its function was one element within a module whose integration was driven by larger players like Skyworks and Qorvo, who could absorb it into broader solutions. In an ironic twist that underscores how these patterns recur, Skyworks and Qorvo, the very companies that absorbed discrete players like Peregrine, announced a merger in 2025 after coming under pressure from the same integration logic they once exploited, as Qualcomm bundled RF into its modem platform and Broadcom captured premium sockets through superior filter technology.</p><p>As mentioned before, the obvious framework is that winners possessed three traits: technology that couldn&#8217;t be easily replicated or brought in-house, a business model that scaled with volume rather than being diluted by it, and customer diversification that avoided existential dependency on a single OEM. These criteria are sound as a first filter but insufficient as a complete explanation. 
Several additional conditions distinguished companies that captured compounding long-term value from those that captured transient technological rents.</p><p><em>Position at an architectural chokepoint versus within a subsystem.</em> The winners defined entire layers of the technology stack; ARM owned compute architecture, TSMC controlled the manufacturing platform, Qualcomm dominated connectivity. The losers occupied functional slots within subsystems ultimately defined and controlled by others. The critical distinction is between defining a layer of the stack and populating a slot within someone else&#8217;s layer. Companies in the latter position are inherently vulnerable to integration by the layer owner, who has both the incentive and the architectural vantage point to absorb adjacent functionality over time.</p><p><em>Ecosystem lock-in versus pure technical superiority.</em> ARM&#8217;s advantage wasn&#8217;t simply that its architecture was hard to replicate technically; it was that switching costs were borne by the entire downstream software ecosystem, creating a coordination problem no single actor could resolve. The losers had advantages rooted in pure technical performance, such as better signal-to-noise ratios, more accurate motion sensing, or lower insertion loss, advantages that exist on a continuous spectrum where good-enough alternatives can emerge. Ecosystem lock-in creates a discontinuous barrier where partial replication captures zero value.</p><p><em>Compounding versus static defensibility.</em> TSMC&#8217;s moat widened with each process node. ARM&#8217;s ecosystem became more entrenched with each line of software compiled for its instruction set. Qualcomm&#8217;s patent portfolio expanded with each new cellular standard. The losers had defensibility that was static at best and eroding at worst: their leads were real at a point in time but didn&#8217;t compound. 
Being ahead technically is valuable; being ahead in a way that makes you <em>more</em> ahead next year is transformative.</p><p><em>Pricing power durability under volume growth.</em> ARM&#8217;s royalty model captured a percentage of chip value regardless of ASP trends. TSMC&#8217;s pricing power <em>increased</em> at advanced nodes as fewer foundries could follow. The losers faced annual price-down negotiations of five to fifteen per cent that eroded margins even as volumes grew. The relevant question is not &#8220;does revenue grow with industry volume?&#8221; but &#8220;does the supplier&#8217;s economic share of each unit remain stable or expand as the market matures?&#8221;</p><p><em>Standard-setting influence.</em> The winners defined or heavily influenced the standards around which the industry organised. Standard-setting creates a self-reinforcing dynamic where the industry&#8217;s technical direction inherently benefits the standard-setter. Suppliers that merely conform to standards set by others lack this structural tailwind and are perpetually adapting to specifications whose evolution may not serve their interests.</p><p>Being a critical component supplier in a new technology wave is necessary but not sufficient for outsized investment returns. The smartphone parallel suggests investors should evaluate any picks-and-shovels opportunity against a more demanding set of criteria than mere technological importance: whether the company sits at an architectural chokepoint or within someone else&#8217;s subsystem; whether its defensibility compounds over time or erodes as integration advances; whether its pricing power survives OEM negotiating leverage as volumes scale; whether it benefits from ecosystem lock-in that creates discontinuous switching costs; and whether it influences the standards around which the industry organises. Companies that satisfy most of these conditions tend to capture a disproportionate share of ecosystem value. 
Companies that satisfy only the first-order criterion of technological relevance tend to be acquired, compressed, or integrated away before the market&#8217;s full value is realised. In other words, what is the moat? To earn outsized returns, a company has to provide a valuable product, but it also has to have a moat that preserves its superiority in delivering better products, because well-resourced and well-motivated players will move to take its margin if they can replicate or replace it.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/the-humanoid-robotics-boom-part-i?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/the-humanoid-robotics-boom-part-i?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/the-humanoid-robotics-boom-part-i/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/the-humanoid-robotics-boom-part-i/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Anthropic's $380 Billion Valuation]]></title><description><![CDATA[$30 Billion Raised and the State of the Models]]></description><link>https://www.bonnefinresearch.com/p/notes-005-anthropics-380-billion</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/notes-005-anthropics-380-billion</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Sun, 15 Feb 2026 04:00:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1cf0bec0-b6df-46c4-a570-5009bbfaf47b_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On 12 February 2026, Anthropic announced the close of a US$30 billion Series G funding round, valuing the company at US$380 billion post-money. It was the second-largest private technology financing round in history, trailing only OpenAI&#8217;s raise of over US$40 billion the previous year. Led by Singapore&#8217;s sovereign wealth fund GIC and Coatue, with additional capital from Founders Fund, D.E. Shaw and many more. The deal further demonstrated the simple idea that frontier large language model companies are no longer start-ups in any conventional sense. They are the fastest-growing enterprises the technology industry has ever produced, combined with what some perceive to be the most important technology in human history, and the capital markets are pricing them accordingly.</p><p>To appreciate the scale, consider the broader landscape. 
OpenAI was valued at US$500 billion in an October 2025 secondary share sale and is reportedly seeking a further US$100 billion round that would push its valuation towards US$830 billion. xAI was valued at US$250 billion when it merged with SpaceX on 2 February 2026 (though this is more of an internal valuation), creating a combined entity worth US$1.25 trillion ahead of a planned IPO. Google&#8217;s parent Alphabet, the only publicly traded peer operating at the frontier (Meta&#8217;s Llama currently lags in most performance benchmarks), carries a market capitalisation approaching US$4 trillion, and has committed up to US$185 billion in capital expenditure for 2026 to defend its position. These numbers are staggering, and they are also, in the eyes of many investors, entirely rational bets based on what the AI industry promises to deliver.</p><h2>LLMs as the Measure of Progress</h2><p>There is a lively, even heated, debate within the AI research community about whether large language models represent the path to artificial general intelligence or merely a useful waypoint; some even think they are a dead end. Some researchers argue that the current architecture, for all its remarkable successes, lacks the grounding, reasoning depth, and embodied understanding needed to reach genuine machine intelligence. Others contend that scale, data, and clever engineering may be sufficient to close these gaps, especially as models are augmented with tool use, memory, and agentic capabilities. Regardless of where one lands on this question, the practical reality in early 2026 is that LLMs are the dominant proxy for AI progress. They are the products generating tens of billions of dollars in revenue. They are what enterprises are deploying across their operations. They are what governments are subsidising through infrastructure programmes. And they are what investors are using to determine the current pecking order of the AI industry. 
By most measures, OpenAI, Google DeepMind, and Anthropic are the frontier leaders, alongside ambitious challengers such as xAI, Meta, DeepSeek, and Alibaba, and a vibrant ecosystem of smaller companies, most of them concentrated in the US and China. Benchmark performance (on evaluations many consider flawed to begin with) among the leading flagship models has converged significantly over the past twelve months. On widely tracked evaluations, the gap between Claude Opus 4.6, GPT-5.2, and Gemini 3 Pro has narrowed to the point where leaderboard positioning shifts with each new release. The competitive picture is fluid and increasingly difficult to summarise with a single ranking, especially in such an ambitious and undefined field. What <em>is</em> clear is that no single lab has established a durable lead, and each new model generation resets the competition. Whether large language models ultimately prove central to the long-term pursuit of artificial general intelligence and beyond, or merely a stepping stone superseded by breakthroughs in spatial AI, neuro-symbolic reasoning, novel architectures, or something not yet imagined, is genuinely unknown, but in the current landscape, they are the yardstick by which the industry measures itself.</p><h2>Financial Growth at Unprecedented Scale</h2><p>The financial trajectories of the leading AI companies are, by any historical standard, extraordinary. OpenAI reported annualised recurring revenue of US$20 billion at the end of 2025, up from US$6 billion in 2024 and US$2 billion in 2023, a tenfold increase over two years. In its blog post disclosing the figure, OpenAI&#8217;s CFO Sarah Friar drew a direct line between revenue growth and compute expansion, noting that the company&#8217;s available compute had grown from 0.2 gigawatts in 2023 to 1.9 gigawatts in 2025. Anthropic has tracked a remarkably similar curve. 
The company disclosed alongside its Series G that its run-rate revenue had reached US$14 billion, growing more than tenfold annually over the past three years. At the start of 2025, Anthropic&#8217;s run rate was approximately US$1 billion. By August it had surpassed US$5 billion. The number of customers spending more than US$100,000 annually has grown sevenfold in the past year, and more than 500 now spend over US$1 million. Google does not disclose Gemini-specific revenue, but the sheer scale of its investment, mentioned earlier, is telling: with up to US$185 billion earmarked for capital expenditure in 2026, Alphabet is outspending every other AI participant by a wide margin, a reflection of both its existing infrastructure and its determination not to cede the frontier.</p><h2>Claude Code, Cowork and the SaaSpocalypse</h2><p>If any single product has defined Anthropic&#8217;s ascent in early 2026, it is Claude Code. Launched in May 2025, the AI coding agent has grown to an annualised revenue run rate of over US$2.5 billion (a figure cited by Anthropic itself, though the exact methodology is unclear as Claude Code access is bundled into broader Claude subscription tiers and API usage, making it difficult to isolate precisely), having more than doubled since the start of 2026 alone. Business subscriptions have quadrupled in the same period, and enterprise customers now account for more than half of Claude Code&#8217;s revenue. A recent SemiAnalysis report estimated that roughly 4 per cent of all public GitHub commits worldwide were authored by Claude Code, double the share from just one month earlier. 
The anecdotal evidence is just as striking: Spotify co-CEO Gustav S&#246;derstr&#246;m told investors this week that the company&#8217;s best engineers have not written a single line of code since December, instead generating and supervising output through an internal system built on Claude Code.</p><p>The product that truly rattled global markets was Claude Cowork, a desktop productivity agent that extends Claude&#8217;s capabilities beyond coding into broader knowledge work. When Anthropic released eleven open-source, industry-specific plug-ins for Cowork on 30 January, covering legal, finance, sales, and data analysis workflows, the reaction was severe. Software stocks across the S&amp;P 500 fell sharply. Thomson Reuters and LegalZoom each dropped more than 15 per cent. FactSet fell around 10 per cent, and S&amp;P Global, Moody&#8217;s, and dozens of other enterprise software names suffered double-digit declines. The selloff spread globally, hitting Indian IT exporters, Japanese staffing firms, and Chinese software indices. According to J.P. Morgan, software stocks have now undergone their largest non-recessionary twelve-month drawdown in over thirty years, with approximately US$2 trillion in market capitalisation wiped from the sector peak. Nvidia CEO Jensen Huang called the panic &#8220;illogical,&#8221; arguing that AI will use existing software tools rather than replace them. Analysts at Wedbush cautioned that investors were pricing in a doomsday scenario far from reality. Gartner wrote that Cowork&#8217;s plug-ins are potential disrupters for task-level knowledge work but not a replacement for SaaS applications managing critical business operations. J.P. Morgan&#8217;s own strategists argued the selloff had gone too far, noting deeply embedded switching costs and strong earnings fundamentals across the sector. 
But Goldman Sachs strategist Ben Snider offered a more sobering view, warning this may be &#8220;the end of the beginning&#8221; and drawing parallels to the long decline of the newspaper industry. Jefferies coined the term &#8220;SaaSpocalypse.&#8221; Whether the market reaction is rational or not, it lays bare the depth of fear and uncertainty gripping the industry as many simply do not know how far or how fast these AI tools will reshape the landscape, and that uncertainty alone is enough to move trillions.</p><h2>Consumer Versus Enterprise</h2><p>One of the more notable dynamics in AI right now is the divergence between consumer and enterprise markets. In consumer AI, ChatGPT remains dominant but its lead is shrinking fast. Similarweb data from January 2026 puts ChatGPT at roughly 64.5 per cent of global generative AI web traffic, down from 86.7 per cent a year earlier. On a mobile app basis, Apptopia shows it falling from 69.1 per cent to 45.3 per cent. Google&#8217;s Gemini has been the main beneficiary, rising from 5.7 per cent to 21.5 per cent in web traffic and to 25.2 per cent on an app basis as reported by some sources. This has been powered by integration across Android, Chrome, and Workspace combined with an increasingly capable model. Gemini now has over 750 million monthly active users, closing in on ChatGPT&#8217;s estimated 810 million. Claude holds about 2 per cent of consumer traffic according to some sources, though it leads all competitors in time spent per daily active user at roughly 34.7 minutes. The enterprise market looks very different. According to Menlo Ventures, Anthropic overtook OpenAI as the top enterprise LLM provider in 2025 with 32 per cent of usage, versus OpenAI&#8217;s 25 per cent (down from 50 per cent in 2023). 
By late 2025, Anthropic&#8217;s share of enterprise LLM spend had risen to an estimated 40 per cent.</p><p>The split could potentially be explained by consumer AI being more of a distribution game, where pre-installation, brand recognition, and free tiers matter most. Google&#8217;s ecosystem and OpenAI&#8217;s first-mover advantage give Gemini and ChatGPT distribution strengths that are hard to match, especially in the short term. Enterprise AI, by contrast, can be seen as won on production reliability, safety, and the handling of complex workflows, where Anthropic has built a disproportionate presence relative to its consumer footprint. This is obviously a simplification, but I think a potentially useful one for understanding the situation.</p><p>Enterprise adoption also appears further ahead in terms of clear value. Enterprise generative AI spending tripled to roughly $37 billion in 2025, according to Menlo Ventures, and the rationale is straightforward: models that can automate coding, draft documents, handle customer queries, and process data at a fraction of the cost of human labour represent an obvious productivity gain. The ROI case largely speaks for itself. On the consumer side, the picture is less clear. Most of what today&#8217;s chatbots offer, whether answering questions, summarising information, or helping with writing, are things the average person can already do themselves well enough, or that free tiers handle to a good enough standard. The convenience gain exists, but it&#8217;s modest relative to what enterprises see, and it&#8217;s not yet obvious why hundreds of millions of people would pay a monthly subscription for it. This is starting to show in how companies are thinking about business models. OpenAI has begun exploring advertising, signalling that even the market leader sees limits to pure subscription growth at consumer scale. 
A search-adjacent, ad-supported model (closer to how Google has monetised) may prove a more natural fit for mass consumer AI than asking users to pay $20 a month. Consumer AI may well find its footing as capabilities improve and genuinely new use cases emerge, but for now the gap between what&#8217;s on offer and what feels essential in daily life remains wide, and the business model question is still very much open.</p><h2>Closing Remarks</h2><p>The fundraising arms race among AI companies reflects a simple reality: training and running frontier AI models is extraordinarily expensive. These AI companies are raising more money than any private enterprise has ever raised before, and none of them is profitable. OpenAI reportedly lost approximately US$9 billion in 2025 (based on projections) on US$20 billion in revenue. Anthropic&#8217;s burn rate has been estimated at several billion dollars annually, though the company has indicated it expects to reduce losses substantially in 2026 and reach profitability by 2028.</p><p>These valuations defy every conventional metric. At US$380 billion on US$14 billion in run-rate revenue, Anthropic trades at roughly 27 times revenue. OpenAI, at US$500 billion on US$20 billion, trades at 25 times. At its rumoured US$830 billion target, the multiple would rise to over 40 times. For context, mature high-growth SaaS companies typically trade at 5&#8211;20 times revenue. And yes, I know the P/S ratio is often an imperfect indicator, but it does give a general idea of valuations. Whether these valuations prove justified is the central question of the current technology cycle. The bull case is straightforward: if AI can automate a meaningful fraction of knowledge work, the addressable market is measured in trillions of dollars, and the companies that control the most capable models will be the platforms upon which that future is built. 
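As an aside, the multiples above are simple back-of-envelope arithmetic that readers can verify themselves. A minimal sketch, using only the approximate figures quoted in this piece (the `ps_multiple` helper is purely illustrative, not a standard metric implementation):

```python
# Back-of-envelope price-to-sales multiples from the approximate,
# speculative figures quoted above. All values in US$ billions.
def ps_multiple(valuation_bn: float, run_rate_revenue_bn: float) -> float:
    """Return valuation divided by annualised run-rate revenue."""
    return valuation_bn / run_rate_revenue_bn

anthropic = ps_multiple(380.0, 14.0)       # roughly 27x
openai = ps_multiple(500.0, 20.0)          # 25x
openai_target = ps_multiple(830.0, 20.0)   # over 40x at the rumoured target

print(f"Anthropic ~{anthropic:.0f}x, OpenAI {openai:.0f}x, "
      f"OpenAI at rumoured target ~{openai_target:.0f}x")
```

The usual caveat applies: run-rate revenue annualises a single recent period, so in a fast-growing (or decelerating) business it can understate or overstate steady-state sales.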
The bear case is equally clear: the capital intensity of training and inference is enormous, competition is fierce, open-source alternatives continue to improve, and the history of technology is littered with examples of early leaders who failed to capture the value they helped create.</p><p>The AI industry in early 2026 sits in an exciting but uncertain place. The infrastructure buildout is accelerating, with hyperscalers collectively planning to spend over US$660 billion on AI facilities this year alone. Both Anthropic and OpenAI are reportedly preparing for initial public offerings, which would subject their financials to the scrutiny of public markets for the first time. The software industry is grappling with what may be a once-in-a-generation disruption to its business model. And the question of whether LLMs are a stepping stone to something far more profound or a ceiling that will require fundamentally new approaches to surpass remains unresolved.</p><p>What is beyond dispute is the pace. Anthropic went from zero revenue to US$14 billion in annualised run rate in roughly three years. OpenAI reached US$20 billion in the same timeframe. These are growth curves that have no precedent in the history of enterprise software, and they are occurring against a backdrop of geopolitical competition, massive capital deployment, and a potential complete reinvention of the global workforce and jobs. The race to build artificial intelligence is one of the defining economic contests of the twenty-first century, and its outcome will shape industries, labour markets, and societies for decades to come.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. 
Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/notes-005-anthropics-380-billion?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/notes-005-anthropics-380-billion?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/notes-005-anthropics-380-billion/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/notes-005-anthropics-380-billion/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[What to Do About PayPal?]]></title><description><![CDATA[Value Trap or Value Opportunity?]]></description><link>https://www.bonnefinresearch.com/p/what-to-do-about-paypal</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/what-to-do-about-paypal</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Thu, 12 Feb 2026 04:00:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0a29bbdc-1887-43f6-a07d-796ac0789445_1200x630.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>PayPal has taken an extraordinary beating over the past few years as the share price has gone from a peak of $307.82 (closing price on 23 July 2021) all the way to $39.90 (closing price on 5 February 2026). The share price has even outperformed the market cap due to share buybacks, showing just how truly devastating this period of time has been for the valuation of the business. It&#8217;s been a staggering 87% drop in just four and a half years or so in the share price, and approximately 89% drop in market capitalisation. Following a 20% single day drop in the stock price last week, the question has to be asked: is this battered stock a potential value opportunity or a value trap disguised as a bargain?</p><p>With shares trading at a trailing P/E ratio around 7.5x and a forward P/E of approximately 7.5-10x (depending on earnings assumptions), the valuation appears extraordinarily cheap on the surface level. To give context, that is significantly lower than the current S&amp;P500 P/E ratio, which is sitting around 30 and its own 5 year average has been roughly 31x. However, unlike some value situations with tangible asset bases providing downside protection, for example, factories to liquidate, property portfolios to monetise, and inventory to sell, PayPal&#8217;s worth rests almost entirely on its ability to generate future earnings. There are no physical assets to fall back on, just software, brand equity, network effects, and an increasingly challenged competitive position. 
This fundamental situation transforms the valuation question from &#8220;how tangibly cheap is it right now?&#8221; to &#8220;will the business survive and generate sufficient cash not only to justify investment at these depressed share prices but to provide an adequate return?&#8221;</p><h2>Business Overview</h2><p>PayPal operates a two-sided digital payments platform connecting approximately 439 million active accounts (as at Q4 2025) with over 30 million merchant accounts across roughly 200 markets. The crown jewel of its products is branded checkout, which nearly everyone should be familiar with: the PayPal button at online checkout. Consumers select PayPal as their payment method on merchant websites, with PayPal handling authentication, fraud screening, and settlement. This carries higher take rates and margins than its unbranded processing. On top of this, the product lineup currently includes Venmo, Xoom, a Buy Now Pay Later (BNPL) product, the PayPal Debit Card, the PYUSD stablecoin, and other credit products.</p><p>PayPal&#8217;s revenue for 2025 was US$33.2 billion (up 4% year-on-year), split primarily between transaction revenue and other value-added services (OVAS). Transaction revenue represents the lion&#8217;s share at around $29.8 billion (~90% of total revenue), with OVAS generating $3.4 billion (~10% of total revenue). However, OVAS did grow at a much faster pace of ~14% compared to ~3% for transaction revenue YoY. Transaction revenue is derived from fees charged to merchants and consumers for payment processing, currency conversion, and instant transfers. The OVAS segment includes interest earned on customer account balances (the &#8220;float&#8221;), revenue from credit products (interest and fees on loans receivable), partnership revenue, and subscription fees. 
One part I find interesting is interest on customer balances, a legacy feature dating back to PayPal&#8217;s earliest days, which contributed approximately US$1.2&#8211;1.3 billion to OVAS in FY2025. It represents pure, high-margin passive income generated from investing funds held in user accounts and funds in transit. With 439 million active accounts, the company holds substantial customer balances at any given time, and in a high-interest-rate environment these idle funds generate significant passive income. The key mechanics are twofold: interest on customer balances, where PayPal earns interest on the cash balances users hold in their accounts, and transaction timing, where funds in transit between parties, whether via the ACH network (1&#8211;2 business day delays), escrow, or settlement windows, create additional float that PayPal can invest short-term. These customer balances effectively function like deposits at a bank, with PayPal investing them in low-risk instruments to generate a high-margin income stream entirely separate from transaction fees. However, this float income is largely outside management&#8217;s control, being driven by prevailing interest rates; with rates expected to decline in 2026, this revenue stream faces headwinds.</p><p>However, as you would assume given the depressed stock price, not all is going well for PayPal and its future prospects. The biggest concern is that online branded checkout total payment volume (TPV) growth fell to just 1% on a currency-neutral basis in Q4 2025, down from 5% in Q3 and 7% a year earlier. This is the company&#8217;s highest-margin product and the metric that matters most. The slowdown reflects checkout friction, consumer fatigue, competitive encroachment from Apple Pay, Google Pay, and Shopify&#8217;s native checkout, and execution issues in merchant integration. 
Add to this take rate compression on transactions, increasing competition in the FinTech space, potential macro/consumer weakness in the &#8216;K-shaped economy&#8217;, tariff and trade disruption (platforms like Temu and Shein have been meaningful contributors to PayPal revenue), difficulty rolling out new products into legacy ecosystems, and no doubt a few more factors, all of which have been weighing heavily on the business and, as a result, its stock. It&#8217;s not all doom and gloom, with many of its newer and smaller businesses growing at decent rates; the issue is that its main business line is increasingly under threat.</p><h2>Financial Health</h2><p>One saving grace of PayPal, if treated as a value investment, is that it currently possesses financial stability. As a result, there is no countdown in which you need to hope for a business turnaround or liquidation event before it goes bankrupt. PayPal&#8217;s balance sheet is unusual for a technology company because of the large pool of customer funds it holds, which inflate both assets and liabilities. As of Q4 2025, its most recent reporting period, PayPal maintains a solid balance sheet with total assets of approximately $80.2 billion against total liabilities of roughly $59.9 billion, leaving total shareholders&#8217; equity at around $20.3 billion. The company holds a substantial liquidity position, with cash, cash equivalents and investments of approximately $14.8 billion comfortably exceeding its total debt of around $10.0 billion. Working capital of around $12.7 billion further demonstrates the company&#8217;s strong short-term financial health and its ability to meet near-term obligations with a comfortable margin. 
The company reported tangible assets of approximately US$9.3 billion, which, when compared against a market capitalisation of around US$37&#8211;40 billion, means there is no asset floor for a liquidation-style investment thesis; as mentioned before, an investment thesis in PayPal must rationally rest on the company&#8217;s earnings power and cash flow generation.</p><h2>Earnings Power</h2><p>Revenue for PayPal grew 4% to US$33.2 billion and net income rose 26% to US$5.23 billion. Operating income grew 14% to US$6.1 billion (margins rose to 18.3%), and GAAP EPS grew 35% to US$5.41. The gap between 26% net income growth and 35% EPS growth is attributable to buybacks reducing the share count by 7%. It is important to note that some of these gains in income came from volatile crypto gains and a one-off tax benefit; on a non-GAAP basis, net income grew just 6% to US$5.14 billion.</p><p>Adjusted free cash flow came in at US$6.4 billion, down 3% from US$6.6 billion in FY2024, with the modest decline largely reflecting higher capex (up 25% to US$852M). Against a ~US$37 billion market cap, that implies a ~17% FCF yield, which, if sustained, would be extraordinarily high. Reported FCF looks worse at US$5.6 billion, down 18% YoY, but the US$847 million gap between reported and adjusted FCF is almost entirely a timing distortion from the rapidly growing BNPL book, where originations surged ~50% to US$36.7 billion and sale proceeds hadn&#8217;t caught up by period-end.</p><p>So, it is pretty clear that PayPal is currently in a strong position from a balance sheet perspective and has decent recent earnings, albeit with minimal growth. Whether this depressed stock price is fair therefore comes down to future earnings, which will make or break an investment thesis on PayPal. 
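The buyback effect on EPS noted above is simple compounding arithmetic. A quick sketch, using only the figures reported in the text, shows how a 7% lower share count amplifies 26% net income growth into roughly 35% EPS growth:

```python
# EPS growth from net income growth plus share count reduction.
# Figures are PayPal's FY2025 results as cited above.
net_income_growth = 0.26    # net income up 26%
share_count_change = -0.07  # buybacks cut shares outstanding ~7%

# EPS = net income / shares, so the two effects compound.
eps_growth = (1 + net_income_growth) / (1 + share_count_change) - 1
print(f"Implied EPS growth: {eps_growth:.1%}")  # ~35.5%, in line with the reported 35%
```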
However, figuring out the long-term earnings power of PayPal requires much more than examining recent numbers; it requires an understanding of its products and competition, as ultimately it is customers who dictate how much revenue a company receives, and the market and competitive landscape that dictate much of the expense and profit margin dynamics.</p><h2>Product &amp; Competition</h2><h3>Moat</h3><p>As the general thesis is that PayPal is under attack from intense competition in the FinTech space, let us first see just how defensible its position is as one of the leading payment processors. Does it have durable competitive advantages? This question is especially important for PayPal at its current prices. Even with minimal growth, if the company can protect its existing revenue streams and share, it could theoretically return shareholders&#8217; money (at current prices) many times over with the cash it generates over the next few decades.</p><p>That said, finding investment opportunities isn&#8217;t so easy, and PayPal gives a complicated answer to the question of moats, as there are no clear, indisputable competitive advantages to carry it through a landscape of strong competition. On the scale front, PayPal benefits from scale in fraud detection (more transactions improve machine learning models), in compliance infrastructure, and in technology platform costs spread across US$1.9 trillion in total payment volume. However, payment processing is not a natural monopoly; Stripe, Adyen, and others have demonstrated that new entrants can achieve efficient scale relatively quickly. PayPal&#8217;s scale advantage is real but diminishing as competitors grow: Stripe processed US$1.4 trillion in TPV in 2024 alone.</p><p>Furthermore, PayPal was built on powerful network effects: more consumers attracted more merchants, and vice versa. 
Its core value proposition was serving as a trusted intermediary, allowing users to transact online without directly exposing their card details, a significant advantage when e-commerce checkout experiences were clunky and consumer trust was low. However, the increasing prevalence of tokenised card payments through Apple Pay and Google Pay has partially eroded this advantage. These device-native wallets are frictionless to set up, automatically available to nearly all smartphone users, and offer the same security benefits that once differentiated PayPal, all without requiring a separate account. As a result, PayPal&#8217;s network effects have weakened, particularly in mobile and in-app commerce, where biometric-authenticated payments are becoming the default consumer expectation. That said, the disintermediation is uneven: PayPal retains meaningful stickiness in desktop e-commerce, cross-border transactions, and peer-to-peer payments, while its two-sided transaction data enables adjacent services such as buy-now-pay-later and merchant lending that device-native wallets do not currently offer. Even so, these network effects continue to dwindle as competitors increase their market share in those fields. The network effect advantage has narrowed, but it has not disappeared.</p><p>PayPal is not strongly counter-positioned against competitors. In fact, newer players like Stripe have counter-positioned against PayPal by offering developer-first APIs, lower fees, and modern infrastructure that legacy PayPal cannot easily replicate without cannibalising existing revenue streams. Stripe&#8217;s willingness to invest heavily in developer tools and documentation while PayPal was milking its existing integrations is a textbook example of counter-positioning working against the incumbent.</p><p>Switching costs are another potential source of competitive advantage for PayPal, but they too have increasingly been competed away. 
Merchants using PayPal&#8217;s branded checkout face moderate switching costs due to integration, reporting, and consumer expectations. However, adding a competing checkout option (Apple Pay, Shop Pay) alongside PayPal is trivial; merchants simply add another button. Consumer switching costs are low: nothing really prevents a PayPal user from paying via a linked card through Apple Pay instead, and doing so is about as easy and quick as switching products gets. Venmo has higher social switching costs among its user base due to the P2P network.</p><p>In terms of brand power, PayPal remains one of the most recognised digital payment brands globally. Studies suggest some groups of consumers trust PayPal more than their bank for storing payment credentials. However, trust alone does not prevent users from choosing faster, more convenient alternatives, especially when many of those alternatives carry strong brand names of their own (think Apple, Google, etc.). Brand strength provides a floor, especially among legacy users, though among younger demographics Venmo and Cash App appear to carry stronger brand affinity than the PayPal parent brand.</p><p>In addition, PayPal does not realistically possess any meaningful cornered resource I can think of, especially since other payment processors have increased in scale and possess vast amounts of data comparable to PayPal&#8217;s. Another potential source of competitive advantage is process power, of which PayPal certainly has a moderate degree. The company has decades of operational experience in regulatory compliance across 200 markets, risk management, KYC/KYB (know your customer/know your business), and global money movement. This institutional knowledge is genuinely valuable and difficult to replicate quickly, but it is not enough to protect the company over the long term.</p><p>PayPal&#8217;s moat is narrow and eroding. 
A durable moat, in my view, is a source of competitive advantage that enables a superior value proposition or product experience for the customer, and/or fundamental cost efficiencies for the firm, and in doing so sustains or grows market share, revenue, and profitability over time. By that measure, PayPal&#8217;s position has weakened considerably. It retains advantages in brand recognition, scale, regulatory process power, and moderate merchant switching costs, but these are increasingly structural legacies rather than active drivers of a superior customer experience. The competitive landscape has fundamentally shifted since PayPal&#8217;s unchallenged dominance of online payments in the 2010s: tokenised wallets, embedded checkout solutions, and buy-now-pay-later providers now deliver equal or better convenience, security, and value to consumers and merchants alike. PayPal&#8217;s remaining advantages are sufficient to slow decline, but no longer sufficient to sustain pricing power or above-market growth indefinitely without shrewd management and execution.</p><h3>Competitive Landscape</h3><p>PayPal still holds the largest global market share in online payment processing at approximately 43&#8211;46%, but this share is under sustained pressure. First, for merchants, Stripe now processes US$1.4 trillion in annual payment volume and offers a developer-first platform with superior API documentation, seamless integration, 135+ currencies, and a rapidly expanding suite of financial products (billing, treasury, capital, identity, tax). Stripe&#8217;s net revenue grew 27.5% in 2024 to US$5.1 billion, and it is valued at US$107&#8211;140 billion in private markets. Stripe serves 62% of Fortune 500 companies across all industries, and for large merchants, Stripe and Adyen increasingly win on price, developer experience, and innovation velocity. 
Stripe charges a standard 2.9% + $0.30 per transaction, whereas PayPal&#8217;s fee structure is historically more complex and often more expensive. Second, as mentioned already, for consumers, Apple Pay and Google Pay have fundamentally changed the checkout experience. These wallet solutions are embedded in the device operating system, activated with biometrics, and require zero additional apps or account creation. PayPal&#8217;s branded checkout, by contrast, requires a redirect, login, and additional friction that reduces conversion rates relative to native wallets. There is nowhere, whether online payments, in-store tap-to-pay, or P2P transfers, where PayPal is demonstrably the cheapest or most convenient option for the end user. It is also worth noting the emerging competitive threat from real-time payment rails. While scale is still small, FedNow and The Clearing House&#8217;s Real-Time Payments Network are growing in transactions and users, with a &#8220;Request for Payment&#8221; feature enabling merchants to request instant bank-to-bank payments and bypassing PayPal entirely for certain transaction types.</p><p>Compounding these challenges, PayPal faces well-capitalised, specialised competitors across virtually every vertical it operates in. In buy-now-pay-later, it competes against Klarna, Afterpay, and Affirm, each of which has built dedicated brand recognition and merchant partnerships. In peer-to-peer payments, Venmo (which PayPal owns) faces Zelle, which is embedded directly into the banking apps of most major US financial institutions. In merchant services, Adyen and Stripe continue to take share. In in-store payments, Apple Pay and Google Pay dominate. PayPal is present in many of these categories but market-leading in none, and in each one it faces at least one competitor with a structural advantage, whether through device integration, banking relationships, or developer ecosystem. 
Without a cohesive platform strategy that ties these separate products into a unified ecosystem with compounding network effects, PayPal risks being a competent generalist that is steadily outcompeted by best-in-class specialists in every segment it operates in.</p><h3>Product Value Proposition</h3><p>At its core, PayPal&#8217;s value proposition is about selling trust and convenience in digital transactions. For consumers, it offers a single account that stores payment credentials, enabling them to pay online without exposing card details to individual merchants, backed by industry-leading buyer protection. For merchants, it provides a recognisable checkout button that signals credibility and reduces purchase hesitation for buyers, particularly first-time visitors to unfamiliar sites, effectively allowing merchants to borrow PayPal&#8217;s brand trust to convert customers they haven&#8217;t yet earned loyalty from. The core product value sits in an intermediary layer between consumers and merchants, monetising both sides through transaction fees.</p><p>Within that framework, PayPal retains genuine strengths and value in specific areas. Its buyer protection remains superior to anything offered by Apple Pay, Google Pay, or Stripe for marketplace transactions involving unknown sellers, and is widely cited by consumers as a key trust driver. Venmo, with approximately 100 million active US accounts, is a differentiated asset among younger consumers, and its commerce capabilities are accelerating (Pay With Venmo payment volume grew over 50% in early 2025). PayPal&#8217;s global reach across approximately 200 markets provides genuine breadth, and its BNPL offering, growing at over 20% annually, benefits from seamless integration into an existing checkout base of hundreds of millions of users. However, these value proposition advantages must be weighed against their eroding context. 
PayPal&#8217;s conversion claims (that businesses see 33% more completed checkouts) derive from PayPal-commissioned research that predates the recent acceleration of biometric wallet adoption, and Apple Pay now achieves comparable or superior frictionless conversion. Venmo, while strong in peer-to-peer payments, faces Zelle, which is embedded directly into the banking apps of over 2,300 US financial institutions. And PayPal&#8217;s BNPL competes head-to-head with dedicated specialists like Klarna, Afterpay, and Affirm, each with focused brand positioning and merchant partnerships. Ultimately, PayPal&#8217;s value proposition is strongest among legacy users, older demographics, use cases requiring buyer protection, and markets where mobile wallet alternatives have less penetration, and weakest among tech-savvy consumers, developer-centric merchants, and markets where biometric wallets dominate the checkout flow.</p><h3>The Legacy Problem/Opportunity</h3><p>PayPal&#8217;s current predicament is substantially self-inflicted. For years PayPal enjoyed a de facto monopoly on online checkout, with the default payment button on virtually every e-commerce site being theirs. This dominance likely bred complacency. While Stripe obsessed over developer experience and Apple integrated payments seamlessly into its hardware ecosystem, PayPal underinvested in product innovation and relied on its incumbency.</p><p>Critically, PayPal&#8217;s historically high take rates created the very profit margin that new entrants used as their entry opportunity. Stripe, Adyen, and others recognised they could offer a materially better product at a lower price point and still build profitable businesses. As Bezos famously put it: &#8220;Your margin is my opportunity.&#8221; The result is a company that failed to anticipate or respond adequately to the twin threats of cheaper merchant processing and frictionless consumer wallets.</p><p>Leadership instability has compounded the challenges. 
Dan Schulman presided over the pandemic boom and subsequent decline. Alex Chriss was brought in to reignite innovation but departed after roughly two years. Now Enrique Lores, a hardware-industry veteran from HP, leads the company. Management has acknowledged that over 15 years of building custom, one-off integrations for individual merchants has left PayPal with significant technical debt: thousands of unique, outdated connections that slow the rollout of new features and make it difficult to modernise the platform. Migrating these merchants onto a standardised, modern system is a delicate process, as any disruption risks pushing them to competitors. PayPal does not expect to begin retiring these legacy integrations until 2027 at the earliest. By contrast, Stripe and Adyen were built from scratch on clean, modern architectures and carry none of this baggage. Successfully turning around a company in this position requires simultaneously shipping new products faster, cutting prices, improving the user experience, and rebuilding legacy infrastructure. This is possible in theory but extraordinarily difficult in practice, and the history of such attempts is littered with more failures than successes.</p><p>However, not all is necessarily lost, and the existing legacy user base is one of the biggest sources of potential for the company. As at Q4 2025, PayPal has 439 million active accounts (up 1.1% year-on-year) and 231 million monthly active accounts (up 1%). A bull case can be made that this base constitutes a durable revenue floor. PayPal&#8217;s users skew older and more habitual; many have used the platform for years and are unlikely to actively switch, not necessarily because PayPal is superior, but because switching requires conscious effort and the perceived benefit of changing payment methods remains low enough that inertia prevails. 
In markets like Germany and Brazil, and for marketplace transactions requiring buyer protection, PayPal remains deeply entrenched. Over 10 million live websites offer PayPal today. Even if growth stalls entirely, PayPal&#8217;s existing scale, 439 million active accounts processing $1.8 trillion in annual payment volume at a ~1.65% transaction take rate, generated $33.2 billion in net revenue in 2025. This baseline provides a revenue floor that could support an investment thesis even without a successful turnaround; the question is whether PayPal can sustain this user base and transaction volume. Finally, the headline decline in transactions per active account (down 5% to 57.7 per year) is not as bad as it may initially seem. PayPal has been deliberately de-emphasising its low-margin unbranded processing business (Braintree/PSP). When this volume is excluded, transactions per active account actually rose 5%, indicating that PayPal&#8217;s core branded users are becoming more engaged.</p><p>Beyond mere retention, there are credible levers to extract more value from this installed base. As alluded to before, Venmo monetisation is the most tangible near-term opportunity, with revenue growing 20% to US$1.7 billion. If commercialisation accelerates and Venmo successfully transitions into a full-service commerce and banking app, it could unlock billions in value. PayPal debit card users transact roughly 6x more than checkout-only accounts, suggesting that omnichannel initiatives could deepen engagement per user even without new account growth. BNPL expansion, at US$40 billion in TPV growing 20%+, represents momentum in a large addressable market, with upstream BNPL placement (showing BNPL on product pages) potentially driving higher conversion, while omnichannel extension via the debit card and Tap to Pay could open new offline volume (the US debit card saw 50%+ TPV growth). 
Looking further out, agentic commerce, where AI agents conduct transactions on behalf of consumers, is early-stage but potentially transformative. PayPal&#8217;s positioning as a trust layer with KYC/KYB capabilities for AI-agent payments is strategically sound, though execution is unproven. PYUSD stablecoin circulation grew 600% in 2025, and if blockchain-based payments scale meaningfully, PayPal&#8217;s first-mover advantage in stablecoin issuance among traditional fintech players could prove valuable. Fee restructuring remains an additional option: PayPal could cut fees aggressively to compete on price, sacrificing short-term margins for volume retention. Finally, at an approximately US$37 billion market cap with US$6 billion+ in annual FCF, PayPal is theoretically an acquisition target for a large bank or technology company seeking payments infrastructure, providing a potential soft valuation floor. However, as previously mentioned, some of these initiatives face stern competition and entrenched incumbents, while others are at highly speculative stages with little ability to determine who the winners will be or what the economics will look like.</p><h2>Valuation</h2><p>By virtually every valuation metric, PayPal is trading at or near historically unprecedented cheapness. The trailing P/E of approximately 7.5x represents a significant discount to the five-year average of roughly 31x and an even larger one to the ten-year average of approximately 40x. The PEG ratio of approximately 0.83 sits below the traditional threshold of 1 for an undervalued growth stock, suggesting that even after accounting for the company&#8217;s modest growth rate, the market may be underpricing its earnings trajectory. The FCF yield of ~17% is extraordinarily high for any profitable company, let alone one generating US$33 billion in annual revenue. 
Furthermore, PayPal currently delivers strong returns on capital (~26% ROE, ~24% ROIC), reflecting the inherent advantages of a capital-light, platform-based business model that doesn&#8217;t require heavy physical infrastructure to generate revenue. These numbers describe a business that compounds capital efficiently: every dollar of capital put into the company is producing a high return from operations.</p><p>Yet again, though, the question is whether this large incumbent can maintain its market position going forward, as a business can only sustain high returns on capital when it offers products that are sought after.</p><p>A discounted cash flow analysis is particularly relevant for PayPal because, as established, there is no meaningful tangible asset floor, so the vast majority of value derives from future cash flows. Management has guided for at least US$6 billion in adjusted free cash flow for 2026, so for the purposes of this analysis I&#8217;m using approximately US$6 billion as the base-case annual FCF figure and applying a 10% discount rate as a simple opportunity cost of capital. Under a conservative scenario with US$6 billion in annual FCF growing at 2% over a ten-year explicit forecast period, then declining at &#8722;2% to &#8722;3% in perpetuity thereafter, the present value of the business comes to approximately US$61&#8211;63 billion. This is roughly 65&#8211;70% above the current market capitalisation of approximately US$37 billion. Even under a simple perpetuity framework with zero growth (US$6 billion / 10%), the implied value is US$60 billion, still more than 60% above today&#8217;s price. 
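The scenarios above can be reproduced in a few lines. This is a sketch using only the figures stated in the text: US$6 billion base FCF, a 10% discount rate, 2% growth for ten years, and a 2.5% perpetual decline taken as the midpoint of the stated &#8722;2% to &#8722;3% range:

```python
# Base-case DCF sketch: US$6bn FCF growing 2%/yr for ten years,
# then shrinking 2.5%/yr in perpetuity, all discounted at 10%.
fcf, r, g10, g_term = 6.0, 0.10, 0.02, -0.025  # US$bn and annual rates

# Present value of the explicit ten-year forecast period.
pv_explicit = sum(fcf * (1 + g10) ** t / (1 + r) ** t for t in range(1, 11))

# Gordon-style terminal value on year-11 FCF, discounted back ten years.
fcf_11 = fcf * (1 + g10) ** 10 * (1 + g_term)
pv_terminal = fcf_11 / (r - g_term) / (1 + r) ** 10

print(f"PV of business: ~US${pv_explicit + pv_terminal:.0f}bn")  # ~US$63bn
print(f"Zero-growth perpetuity: US${fcf / r:.0f}bn")             # US$60bn
```

The same framework reprices quickly under more pessimistic growth assumptions, which is what makes the gap to the ~US$37 billion market cap easy to stress-test.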
A scenario modelling 5% annual FCF decline discounted at 10% yields approximately US$40 billion, suggesting the current market is pricing in something close to permanent, sustained cash flow erosion.</p><p>However, a DCF is only as good as its assumptions, and it should be treated as a general indicator of direction and valuation, not as a specific and accurate benchmark; the real world involves far too many assumptions for it to be highly precise. Investing is more art than science. If branded checkout continues to deteriorate, if Stripe and Apple Pay further erode PayPal&#8217;s position, and if management reinvests aggressively in turnaround efforts that destroy rather than create value, the actual cash flows available to shareholders could fall meaningfully below today&#8217;s US$6 billion run rate. The rescission of PayPal&#8217;s multi-year financial outlook at the Q4 2025 earnings call, which was replaced with single-year 2026 guidance, only underscores the uncertainty that even management faces in forecasting the business trajectory. Investors must weigh the apparent margin of safety in the current valuation against the real possibility that the competitive dynamics of digital payments could compress PayPal&#8217;s earnings power faster than any static model anticipates.</p><h3>Management Capital Allocation</h3><p>A crucial variable for value realisation is management&#8217;s capital allocation discipline. PayPal is generating US$6+ billion in annual free cash flow. The question is: will this cash flow be returned to shareholders or reinvested in ways that destroy value? 
If you assume the base case of PayPal managing to defend and stabilise its market position, or at worst enduring a slow and drawn-out decline, but no substantial growth (probably the safest bet, as complete collapse seems unlikely but explosive growth also seems out of the question for now based on the existing evidence), then the question for shareholders is whether PayPal management can maintain profitability and distribute enough of it to provide a direct return to those who hold its stock.</p><p>On the positive side, PayPal returned US$6 billion via buybacks in 2025 and plans to do the same in 2026. At a US$37 billion market cap, US$6 billion in annual buybacks represents roughly 16% of the company per year, an extraordinarily aggressive pace that mechanically supports EPS growth even if revenue stagnates. The initiation of a dividend, while small (US$0.14/quarter, ~1.4% forward yield), signals commitment to returning capital. Total shareholder yield (buybacks plus dividends) is approximately 16.3%, among the highest in large-cap technology.</p><p>On the risk side, management has guided for US$1 billion+ in capital expenditure and has indicated significant investment in branded checkout improvement, consumer rewards, co-marketing, and omnichannel expansion. If these investments fail to reverse the branded checkout decline, they represent destruction of shareholder capital. The history of large companies throwing good money after bad in turnaround efforts is extensive. Investors must assess whether they trust the incoming CEO to allocate capital wisely in an intensely competitive fintech landscape.</p><h3>Value Framework</h3><p>In the traditional cigar butt framework, you buy something so cheap that there is one last profitable &#8220;puff&#8221; left even in liquidation. PayPal does not possess this characteristic. With net tangible assets of approximately US$9.3 billion, a liquidation would yield far less than the current market cap. 
There is no asset-based margin of safety. This is fundamentally a business where value resides entirely in future earnings and cash flow, not in the balance sheet.</p><p>However, PayPal is generating US$6 billion in free cash flow annually. If you can buy the company for US$37 billion and receive US$6 billion per year in cash flow (much of it returned via buybacks), you are getting a roughly 16% cash flow yield. Over a five-year horizon, even without any growth, the company would generate approximately US$30 billion in cumulative FCF, nearly the entire current market capitalisation. If management returns the majority of this to shareholders through buybacks and dividends, the investment effectively pays for itself, with any years beyond that being tangible investment profit regardless of the market price performance of the stock.</p><p>The critical risks to this thesis are, first, that management reinvests the cash flow in value-destructive turnaround efforts, whether expensive marketing campaigns, acquisitions of dubious strategic value, technology investments that fail to produce returns, or otherwise; and second, that the underlying business deteriorates faster than cash can be returned: if branded checkout goes negative, if the user base declines, and if competitive pressure compresses margins, the US$6 billion in annual FCF may not be sustainable over a five-year horizon and beyond.</p><p>This is the fundamental tension of the PayPal investment case: the business generates extraordinary cash flow today, but the trajectory of that cash flow is uncertain, and management&#8217;s track record of allocating capital wisely is unproven. For a rational value investment to work, you need either a tangible asset floor (which PayPal lacks) or a business strong enough to keep generating profits to reward the investment. 
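The payback arithmetic above can be sketched directly. This is a simple illustration assuming a flat US$6bn of annual FCF and a US$37bn purchase price, both figures from the text:

```python
# Cumulative FCF versus purchase price, assuming zero growth.
market_cap = 37.0  # US$bn purchase price
fcf = 6.0          # US$bn annual free cash flow, held flat

print(f"Cash flow yield: {fcf / market_cap:.0%}")           # ~16%
print(f"Cumulative FCF after 5 years: US${fcf * 5:.0f}bn")  # US$30bn

# First year in which cumulative FCF exceeds the purchase price.
payback_year = next(y for y in range(1, 20) if fcf * y >= market_cap)
print(f"Full payback year (no growth): {payback_year}")     # year 7
```

On these assumptions, cumulative cash generation covers the entire purchase price in roughly seven years, which is the mechanical basis of the "pays for itself" framing, provided the cash is actually returned rather than reinvested poorly.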
PayPal sits uncomfortably between these: it is strong enough to survive and generate cash for years, but possibly not strong enough to produce outsized investment gains or to overcome the compounding effects of a slowly eroding competitive position.</p><h2>Closing Comments</h2><p>PayPal at 7.5x earnings and a 15%+ free cash flow yield is more likely a value opportunity than a value trap, but one with meaningful execution risk that demands disciplined position sizing. The downside from here is already relatively limited. It is highly unlikely that a company with $14.8 billion in cash, billions in free cash flow, and 439 million active accounts simply ceases to exist, so unless you believe PayPal&#8217;s fundamental survival is genuinely in question, even a severely adverse outcome still leaves a business generating several billion dollars in annual revenue from its entrenched user base and legacy integrations. That is a margin of safety of sorts: even if the value of the company declines, it shouldn&#8217;t go to zero.</p><p>If PayPal maintains the current status quo of flat revenue, stable margins, and continued cash return, it is potentially cheap and profitable here. The company generates roughly 15% of its entire market capitalisation in free cash flow annually and is returning the vast majority via buybacks and dividends. At 7.5x earnings, you are not paying for growth; you are being paid to wait, albeit potentially with ordinary returns unless the company can sustain this level of shareholder yield for decades, not years. If there is a genuine uptick, whether branded checkout stabilises, Venmo scales further, or the new CEO delivers operational improvement, there is room for an aggressive multiple re-rating. Even a modest re-rating to 12&#8211;15x earnings implies 60&#8211;100% upside. 
The asymmetry between limited downside and meaningful upside on any positive catalyst is the core of the value case.</p><p>However, this is not a long-term compounding asset with large structural growth prospects. Barring a spectacular turnaround, PayPal is unlikely to return to its former growth trajectory. The moat is narrow and eroding, the industry is moving toward cheaper and more convenient solutions where PayPal does not lead, and the management track record does not inspire confidence. This is realistically not a business you buy and forget about for decades.</p><p>As such, if you do make this investment, your entry and exit prices must be clearly defined relative to your valuation. You need to know what you think PayPal is worth on a conservative basis, know at what price the risk/reward no longer favours you, and have the discipline to exit when your target is reached. This is a decent asset that is undervalued by many metrics, but, currently, it is an investment that requires clear valuation benchmarks for action, not necessarily a permanent compounding holding.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking.
Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p>]]></content:encoded></item><item><title><![CDATA[The Great Copper Investment Thesis]]></title><description><![CDATA[Breaking Commodity Cycles or Repeating Old Traps?]]></description><link>https://www.bonnefinresearch.com/p/the-great-copper-investment-thesis</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/the-great-copper-investment-thesis</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Thu, 05 Feb 2026 23:01:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d577012b-4796-4293-8b28-05c915a4f603_1200x630.png"
length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While copper has been used by human civilisation for thousands of years, its importance has never been greater than it is today. This versatile metal forms the backbone of modern infrastructure, coursing through the electrical wiring of our homes, offices and factories, enabling power transmission across vast networks, and serving as a critical component in countless industrial applications from construction to telecommunications. The average home contains approximately 200 kilograms of copper in its wiring, plumbing and appliances, whilst a single electric vehicle requires nearly four times the copper content of a conventional petrol-powered car. From the heat exchangers in air conditioning systems to the circuit boards in our smartphones, copper&#8217;s superior electrical and thermal conductivity, combined with its durability and malleability, has made it indispensable to modern life.</p><p>However, what truly drives copper&#8217;s perceived investment thesis is not merely its current ubiquity, but rather the large surge in demand anticipated over the coming decades. The global transition towards renewable energy and electrification represents a generational shift that positions copper at the centre of the world&#8217;s most critical infrastructure projects. Wind turbines, solar panels and battery storage systems are all highly copper-intensive, requiring several times more of the metal than traditional energy infrastructure. Similarly, the electrification of transport, the expansion of data centres to support artificial intelligence and cloud computing, and the modernisation of electrical grids to accommodate renewable energy sources all point towards an unprecedented increase in copper consumption. Some industry analysts project that global copper demand could increase by 50% or more by 2040, from 28 million metric tonnes in 2025 to 42 million metric tonnes.
Whilst this is occurring, some also project that new mine supply will struggle to keep pace with this demand trajectory due to declining ore grades, lengthy development timelines and increasingly stringent environmental regulations. It is this structural supply-demand imbalance, set against the backdrop of the energy transition, that forms the foundation of copper&#8217;s current investment case.</p><p>So why is copper so important and commonly used? Well, there are many reasons. Copper possesses a unique combination of properties that makes it nearly irreplaceable across modern industries. With the second-highest electrical and thermal conductivity of any metal (surpassed only by silver), copper excels at transmitting electricity and heat with minimal energy loss. Beyond its conductivity, copper demonstrates exceptional malleability and ductility, allowing it to be drawn into fine wires or shaped into complex forms without breaking. Its natural corrosion resistance ensures longevity even in harsh environments, whilst its antimicrobial properties actively inhibit bacterial growth. Unlike many metals, copper can be easily alloyed with other elements to create materials with enhanced strength, wear resistance, or other specific characteristics. These superior properties combine with economic practicality to make copper the metal of choice for countless applications. Whilst silver offers slightly better conductivity, copper costs a fraction of the price, making it economically viable for large-scale infrastructure projects. Copper&#8217;s durability and corrosion resistance mean installations can last decades with minimal maintenance, reducing long-term replacement costs. Its recyclability is particularly valuable as copper can be recycled repeatedly without any loss of performance, with approximately one-third of global copper supply coming from recycled sources.
The metal&#8217;s abundance relative to precious metals, combined with established mining and refining infrastructure, ensures stable supply chains. Furthermore, copper&#8217;s ease of fabrication reduces manufacturing costs, as it can be processed using conventional techniques without requiring specialised equipment or extreme conditions.</p><p>Approximately three-quarters of all copper production serves the electrical industries, manufacturing wires, telecommunication cables, and electronic components. The remaining quarter goes into alloys such as brass, bronze, and nickel silvers, as well as heat exchangers, plumbing systems, and industrial machinery.</p><p><em>Data Caveat: Apparent inconsistencies in copper industry data may arise because global production figures are derived from estimates, forecasts, and aggregated reporting across multiple agencies, industry groups, and national statistical bodies. There is no single global authority publishing a fully harmonised and independently verified dataset, and reported figures reflect differences in methodology, scope, reporting periods, and subsequent revisions.</em></p><h2>Future Demand</h2><p>The anticipated surge in copper demand stems from multiple complementary forces: stable baseline growth from traditional applications combined with several new demand vectors driven by technological advancement and electrification. Traditional applications (building construction, electrical wiring, industrial machinery, consumer electronics and plumbing) will continue to provide the foundation, with S&amp;P Global projecting core economic demand from construction, electric appliances, conventional vehicles, rail, shipping, aviation and power generation to reach 23 million metric tonnes by 2040, representing 53% of global consumption. 
However, the extraordinary growth that underpins copper&#8217;s investment thesis comes from new demand vectors layered on top of this foundation, most prominently the global energy transition.</p><p>Renewable energy infrastructure is dramatically more copper-intensive than traditional fossil fuel power generation: a wind farm requires approximately nine times more copper per megawatt than a gas-fired power plant, whilst solar installations require around five times more. This stark difference arises from the fundamental physics and engineering of renewable systems. Fossil fuel plants are centralised, self-contained facilities where fuel is burned on-site to spin turbines, and as a result, the copper requirements are largely confined to the generator itself and the immediate connection to the grid. Renewable energy, by contrast, is inherently distributed: a wind farm might spread dozens or hundreds of turbines across vast areas, each requiring substantial copper cabling to connect to collection points, which then feed into substations that manage the variable output, with a single offshore wind turbine containing up to 24 tonnes of copper. Solar installations similarly demand extensive networks of wiring to link individual panels into arrays, inverters to convert DC to AC power, and robust connections to handle fluctuating generation. Moreover, because wind and solar generate power intermittently and often in remote locations far from population centres (offshore wind farms, desert solar arrays), they necessitate massive investments in transmission infrastructure and energy storage systems, all of which are copper-intensive. Battery storage facilities, essential for balancing renewable energy&#8217;s variability, require significant copper for their internal connections and power management systems.
S&amp;P Global projects that energy transition demand, encompassing electric vehicles, battery storage, renewable power capacity, transmission and distribution infrastructure, and electrification in developing countries, will see the largest growth, increasing by more than 7 million metric tonnes to reach 15.7 million metric tonnes by 2040. The electric vehicle revolution alone exemplifies this intensity: an EV requires roughly 3.5 times the copper of a conventional internal combustion engine vehicle, approximately 80 kilograms compared to 23 kilograms.</p><p>Beyond the energy transition, additional technological and geopolitical demand vectors are emerging that compound the supply challenge. The computational infrastructure powering artificial intelligence and cloud services is expected to expand dramatically by 2040, with total installed data centre capacity reaching 550 GW, more than five times 2022 levels; every data centre, every line of code, and every AI model depends on copper conductors linking processors, memory and cooling systems. Rising international tensions and technological advancement in weapons systems could see defence spending double to US$6 trillion by 2040, adding further pressure to copper markets. A potential fifth vector, in the form of humanoid robots, could add another 1.6 million metric tonnes annually if 1 billion units achieve operational status by 2040, equivalent to 6% of current demand. This convergence of traditional baseline growth, energy transition imperatives, technological advancement and geopolitical pressures creates compounding copper demand, potentially reshaping the supply-demand dynamics of the global copper market.</p><h2>The Supply Landscape</h2><p>To understand why so many investors are projecting a structural mismatch between demand and supply, one needs to look at the supply landscape.
Assuming these vast increases in demand for copper hold true, supply must increase to match if prices are to remain at consistent levels; otherwise we may see rapid growth in the price of the metal. However, many are sceptical about the ability of supply to keep pace. Understanding why there is such concern about copper&#8217;s supply challenges requires examining the complex journey from ore to finished product. This multi-stage process involves geological, engineering, logistical, and regulatory hurdles that combine to create significant barriers to rapid supply expansion.</p><h3>Process</h3><p>Copper production begins with geologists searching for viable deposits underground. Modern ore typically contains less than 1% copper; the metal is chemically locked inside rock, not sitting as pure nuggets. The largest deposits are &#8220;porphyry&#8221; types, where tiny copper particles are scattered throughout massive rock formations. Mining methods depend on how deep the deposit sits: open-pit operations work for shallow deposits by digging down in giant terraced steps, whilst underground mining reaches deeper deposits through tunnel networks. Both require massive investment in equipment and skilled crews. Once mined, the copper ore is still roughly 99% waste rock. It gets crushed into a powder as fine as talcum, then goes into large flotation tanks. Special chemicals coat only the copper particles, making them repel water. When air bubbles through the tank, these waterproofed copper bits stick to the bubbles and float to the surface as froth, whilst plain rock sinks to the bottom. This separates out a copper concentrate that&#8217;s now 20-30% copper instead of 1%, a huge improvement, but still mixed with lots of other minerals. This concentrated material is what&#8217;s worth heating up in the next stage. The copper concentrate now goes into giant furnaces heated above 1,200&#176;C.
At these temperatures, the copper partially melts and separates into two layers: heavier &#8220;matte copper&#8221; (about 60% pure) sinks to the bottom, whilst lighter waste floats on top as slag and gets poured off. This matte copper then goes into another furnace where oxygen is blown through it. The oxygen grabs the remaining iron and sulphur, turning the iron into more slag and releasing the sulphur as gas. What&#8217;s left is &#8220;blister copper&#8221; at 98-99% pure: almost there, but those last impurities would ruin copper&#8217;s ability to conduct electricity well. To get the final 1% of purity, the blister copper is shaped into thick plates and hung in tanks of acidic copper solution. Thin pure copper sheets hang between them. When electricity runs through the tank, copper atoms dissolve off the thick plates, travel through the liquid, and stick onto the thin sheets. Over two weeks, these sheets grow to 99.99% pure copper, pure enough for electrical wiring. Everything else either drops to the tank bottom (including valuable gold and silver that get recovered separately) or stays dissolved in the liquid.</p><p>The entire process from discovery to production represents one of the copper industry&#8217;s most significant constraints. New copper mines take an average of 17 years from initial discovery to first production, with a timeline that includes exploration, resource definition, feasibility studies, permitting, financing, construction, and commissioning. This extended development period means that theoretically, supply responses to secular price signals can lag demand by nearly two decades.</p><h3>Global Production Considerations</h3><p>Copper production is highly concentrated geographically. Just six countries account for roughly two-thirds of mining production, creating both efficiency and vulnerability in global supply chains.
Chile remains the leader, producing approximately 5.3 million metric tonnes in 2024, roughly 23% of global output. State-owned Codelco and multinational firms like BHP operate massive facilities supported by decades of mining expertise and infrastructure. The Democratic Republic of Congo ranks second, producing 3.3 million metric tonnes in 2024, where Chinese investment has driven rapid expansion through joint ventures and wholly-owned operations. Peru follows with 2.6 million metric tonnes (11% of global production). China produced 1.8 million metric tonnes from domestic mines in 2024; more significantly, it dominates refining and smelting, accounting for approximately 40-50% of global smelting capacity and producing 12 million metric tonnes of refined copper (44-57% of global refined production). China imports massive amounts of copper concentrate, particularly from Chile, Peru, and the DRC, giving it outsized influence over global copper markets. The United States produced 1.1 million metric tonnes in 2024, with major operations in Arizona, Utah, New Mexico, Nevada, and Montana. American production has remained relatively stable, though the country designated copper as a critical mineral in November 2025, recognising its strategic importance. Other significant producers include Indonesia, Australia, Russia, Kazakhstan, and Mexico. This geographical concentration creates several systemic risks. Supply chains are vulnerable to regional disruptions, whether from natural disasters, political instability, labour disputes, or regulatory changes, any of which can rapidly tighten global markets. China&#8217;s dominance in refining creates particular dependencies, as 66% of globally traded copper concentrate flows to Chinese smelters.
Trade barriers, tariffs, or policy shifts in any major producing nation can cascade through global supply chains.</p><h2>The Bull Case</h2><p>The bull case for copper rests on the argument that this supply shortage differs fundamentally from historical commodity booms and busts. Multiple converging factors suggest supply will remain constrained for decades, creating a prolonged period of structural deficits.</p><p>Unlike typical cyclical industrial demand, the current copper consumption surge stems from policy commitments to decarbonisation and technological transformation that governments worldwide have enshrined in law and international agreements. Countries have committed to net-zero emissions targets, with intermediate milestones that require massive deployment of copper-intensive technologies. In other words, this energy transition is not a choice from which companies can easily retreat or which markets can counteract; rather, it is mandated by layers of laws and government commitments. This provides greater security of long-term demand growth for copper and dampens the chances of sudden, significant changes in demand trajectory. BloombergNEF forecasts that energy-transition demand for copper will triple by 2045, driven by policy-backed deployment of electric vehicles, renewable energy, and grid infrastructure. The International Energy Agency estimates global electricity demand will grow nearly 50% by 2040 compared to 2025 levels, with more than 90% of new power generation capacity in 2025 coming from solar and wind, both highly copper-intensive. Furthermore, this demand is less subject to normal price elasticity, although price certainly remains a large factor.
Governments have allocated trillions in subsidies, and mandates like the European Green Deal and China&#8217;s electrification push provide a strong backstop of demand even if copper prices rise substantially.</p><p>Perhaps the most compelling argument for prolonged shortage is the fundamental time asymmetry between demand growth and supply response. As previously mentioned, new copper mines require an average of 17 years from discovery to production. The extended timeline for bringing a copper mine to operation reflects several complex, sequential hurdles. First, exploration and resource definition can take 5-7 years, as companies must conduct extensive drilling and geological surveys to prove a deposit is economically viable. Then, feasibility studies, environmental impact assessments, and securing permits typically consume another 5-8 years, particularly as environmental regulations have tightened and community consultation processes have become more rigorous. Finally, construction itself requires 3-5 years for building infrastructure like processing facilities, tailings dams, and access roads. This timeline has lengthened significantly over recent decades due to stricter environmental standards, declining ore grades (requiring larger-scale operations), mines being located in increasingly remote areas, and more complex stakeholder engagement processes with local communities and indigenous groups. Even if regulatory approval accelerated tomorrow, a slower period of copper mining growth is already &#8220;built in&#8221; for the next decade. S&amp;P Global&#8217;s January 2026 study projects that total global copper production (mined plus recycled) will peak at 33 million metric tonnes in 2030, then decline to 32 million metric tonnes by 2040 without significant new investment. Primary mined supply specifically is expected to peak at 27 million metric tonnes in 2030 before falling to just 22 million metric tonnes by 2040.
Meanwhile, demand is projected to reach 42 million metric tonnes by 2040, creating a supply gap of more than 10 million metric tonnes even after accounting for recycled copper more than doubling from 4 million to 10 million metric tonnes. Of course, one might assume that if the demand is there, this theoretical shortage should not become a real problem, as higher prices would likely incentivise new mine development and encourage substitution in some applications.</p><p>Bernstein research suggests supply will only catch up to demand around 2040, assuming aggressive mine development beginning immediately. To meet projected demand, the industry would need to open three &#8220;tier-one&#8221; mines (each with annual capacity of 300,000 metric tonnes) every year for the next 14 years, which would be a historic expansion requiring over US$500 billion in capital investment. Goldman Sachs research found that regulatory approvals for new copper mines have fallen to their lowest level in 15 years. Environmental scrutiny, water rights, indigenous land claims, community consultations, and biodiversity assessments all extend permitting timelines. Whilst these protections serve important purposes, they create a regulatory barrier that significantly limits supply growth. In some jurisdictions, permitting alone can take 7-10 years even before construction begins. Community opposition, environmental litigation, and political changes can halt projects already years into development. The regulatory environment shows no signs of easing in most developed nations.</p><p>On top of these regulatory pressures, the industry faces a geological headwind: average ore grades are declining globally as shallow, high-grade deposits become exhausted. Where mines once processed ore containing 2-3% copper, many now work with less than 0.5% copper content.
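</p><p>The scale implied by these figures can be checked with simple arithmetic. The sketch below is a hedged back-of-envelope check using the round numbers quoted above, not authoritative industry data:</p>

```python
# Ore grade determines how much rock must be processed per tonne of copper
for grade_pct in (3.0, 2.0, 0.5):
    print(f"{grade_pct}% ore grade -> {100 / grade_pct:.0f} t of rock per t of copper")

# The Bernstein scenario: three 300,000-tonne "tier-one" mines opened
# every year for 14 years would add this much annual capacity
added_capacity = 3 * 300_000 * 14   # tonnes per year by the end of the period
print(f"Added annual capacity: {added_capacity / 1e6:.1f} Mt")  # 12.6 Mt, vs a >10 Mt gap
```

<p>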
This means moving and processing vastly more rock to produce the same amount of metal.</p><p>Processing lower-grade ore requires more energy, more water, more reagents, and more infrastructure, all whilst producing more waste and tailings. These factors combine to push production costs higher, even as technological improvements deliver incremental efficiencies. Rising energy prices, labour costs, and input costs further compress margins at lower copper prices.</p><p>Although recycling (&#8220;aboveground mining&#8221;) will play an increasing role, even optimistic projections show secondary supply doubling to 10 million metric tonnes by 2040, which is still insufficient to close the supply gap as this is already included in the current S&amp;P Global production projections. Scaling up recycling faces its own constraints as well. Not all copper applications allow easy recovery. Collection infrastructure must be built. Recycled copper still requires energy-intensive reprocessing. Most critically, the stock of copper available for recycling depends on what was installed decades ago, and demand is growing faster than the existing stock. The copper available for recycling today is mostly the copper that was installed in buildings, power grids, and machinery 20 to 40 years ago. However, because global copper consumption was significantly lower in the 1980s and 1990s than it is today, the total &#8220;stock&#8221; reaching its end of life is insufficient to meet modern demand levels.</p><p>Furthermore, while some alternatives exist for some copper applications, S&amp;P Global notes that most feasible substitutions have already been exhausted, &#8220;leaving the metal largely irreplaceable for electrification, data centres, and advanced technology.&#8221; Aluminium can replace copper in some uses, but its conductivity is only 60% that of copper, requiring larger, heavier conductors.
For many applications, particularly those where space, weight, or performance are critical, copper has no current viable substitute at scale.</p><p>The effects of all of this are already showing in current market conditions: LME warehouse stocks fell below 100,000 tonnes in December 2025, signalling acute near-term scarcity. Copper prices reflect this tightness as mounting supply concerns, depleted inventories, and accelerated stockpiling drive prices up. After rising 35% in 2025 (the largest annual gain since 2009), copper hit record levels above US$13,000 per metric tonne in early January 2026.</p><h2>The Bear Case</h2><p>While the bull case may seem extremely appealing to many, there exists, as with all commodities, a bear case, which cautions against extrapolating current constraints indefinitely. History demonstrates that commodity markets possess remarkable adaptive capacity through price signals, innovation, and substitution. Several factors could prevent or substantially mitigate the projected supply crisis, and as a result, suppress the price of copper.</p><p>Commodity markets have a well-established pattern: high prices and profits attract massive capital investment, which leads to oversupply and price collapse. Copper has experienced this cycle repeatedly throughout history. The concern that this time will be &#8220;different&#8221; has proven premature many times before. Currently, copper miners are enjoying near-record prices and strong margins. This should trigger a supply response as companies accelerate development of marginal projects, expand existing operations, and explore aggressively. Whilst the 17-year development timeline constrains immediate response, capital is patient when returns are high. Investors and operators all possess the resources to fund long-duration projects if fundamentals justify the investment.
The current high-price environment may already be laying the foundation for future oversupply, just as it has in previous cycles. Projects sanctioned today at US$13,000 per tonne copper may flood the market in the 2030s, precisely when bulls project maximum scarcity.</p><p>One of the bull case&#8217;s key pillars, the regulatory barriers to mine development, rests on the assumption that current political priorities will persist for decades. This seems unlikely. If copper shortages genuinely threaten economic growth, energy transition goals, national security, or living standards, governments possess the ability to streamline permitting dramatically. Copper is now designated a critical mineral in multiple countries. During World War II, regulatory barriers evaporated when materials became strategic priorities. Chile&#8217;s July 2025 licensing reforms demonstrate that change is possible when political will exists. If copper shortages materially impact electricity costs, vehicle prices, or infrastructure deployment, expect regulatory reforms across jurisdictions. Emergency provisions, fast-track permitting, and exemptions from normal review processes, combined with general urgency in operations and investment, could compress 17-year timelines to 5-7 years. The 17-year average reflects current political preferences, not necessarily fixed constraints.</p><p>The bull case also hinges on a lack of substitutes for copper, and while this is a reasonable bet to make, it is certainly not a guarantee. This assessment may underestimate the innovation that high sustained copper prices would trigger. At US$10,000-15,000 per tonne or beyond, enormous incentives exist to find alternatives to copper.</p><p>Aluminium represents the most obvious substitute. With 60% of copper&#8217;s conductivity, aluminium requires larger cross-sections but costs half as much and weighs one-third as much.
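</p><p>This trade-off can be quantified with a short sketch. Note the conductivity and density figures below are approximate textbook values assumed for illustration, not numbers from this article:</p>

```python
# Assumed textbook values (approximations, for illustration only)
sigma_al_rel = 0.61   # aluminium conductivity relative to copper (~61%)
rho_cu = 8.96         # copper density, g/cm^3
rho_al = 2.70         # aluminium density, g/cm^3

# Matching copper's conductance requires a larger cross-sectional area...
area_ratio = 1 / sigma_al_rel
print(f"Cross-sectional area needed: {area_ratio:.2f}x copper's")  # 1.64x

# ...but because aluminium is far less dense, the equivalent conductor
# still weighs only about half as much as its copper counterpart
mass_ratio = area_ratio * rho_al / rho_cu
print(f"Mass of equivalent conductor: {mass_ratio:.2f}x copper's")  # 0.49x
```

<p>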
Research by Pacific Northwest National Laboratory and Ohio University has produced the first-ever simulation of aluminium conductivity at the atomic level, identifying pathways to increase conductivity through structural modifications and additives. If aluminium&#8217;s conductivity could be increased even modestly, it would become viable for many applications currently requiring copper. Aluminium already serves as a copper substitute in power transmission, where its light weight allows longer spans between towers. It&#8217;s widely used in solar panel frames, offshore wind turbine structures, and some electrical wiring. Automotive manufacturers are researching aluminium for EV motor windings, where a small efficiency loss is offset by weight savings. An estimated 90% of copper applications could theoretically be served by aluminium, although this estimate is quite optimistic.</p><p>Carbon nanotubes (CNTs) represent a more radical alternative. Continuous CNT cables several kilometres long have been produced, offering extraordinary properties: higher conductivity than copper, superior thermal performance, extreme tensile strength (10 times that of steel), half the weight of aluminium, and 100 times the flex life of copper. Companies like DexMat are commercialising CNT-based materials called Galvorn for niche applications, with government and venture capital funding supporting scale-up. Whilst CNT production remains expensive and limited, technology curves suggest costs could fall dramatically with scale, potentially disrupting copper markets.</p><p>Advanced composites combining different materials could leverage each material&#8217;s strengths whilst minimising weaknesses. Steel-core aluminium conductors already serve high-tension transmission. Further innovations in composite conductors could expand their application range.</p><p>Superconductors, whilst requiring cryogenic cooling, eliminate resistance entirely.
Advances in high-temperature superconductors and more efficient cooling systems could make them economically viable for certain applications, particularly data centres and urban distribution where cooling infrastructure costs can be amortised across many users.</p><p>The key insight is that current substitution assessments assume today&#8217;s technology and prices. At US$20,000 per tonne copper (a plausible bull-case scenario), the economics of alternatives shift dramatically, accelerating research, development, and deployment.</p><p>Moving beyond the threat of substitutes, the bull case treats copper demand as price-inelastic, driven by policy mandates. However, high copper prices will, sooner or later, feed through to end-product costs. If EVs, renewable installations, and electronics become significantly more expensive, deployment rates could slow, policy timelines could extend, or alternative approaches could gain favour. Policy goals are typically stated in terms of outcomes (emissions reductions, renewable capacity) rather than specific technologies. If copper constraints make solar panels prohibitively expensive, policy might shift towards nuclear power, which requires far less copper per megawatt. If EV costs rise, policy might pivot towards public transport, which serves more people per kilogram of copper. Engineering efficiency also responds to price signals. Copper windings can be optimised, conductor cross-sections minimised, and designs improved to reduce copper intensity per unit of output. These incremental improvements, multiplied across billions of devices, could substantially reduce aggregate demand.</p><p>Another threat against the bull case exists in the form of innovations in extraction and processing. High prices tend to draw funding for innovation, fresh competition, and new extraction and processing methods. McKinsey has identified several promising technologies already in development.
These include better ways to capture copper particles that currently get lost in processing, new chemical methods for extracting copper from sulphide ores, and artificial intelligence systems that can optimise every step of the extraction process to squeeze more copper from each tonne of rock. But the really game-changing possibilities go further. In-situ leaching, which is essentially dissolving copper underground and pumping it to the surface, could work on deposits that can&#8217;t be mined conventionally. Scientists are experimenting with bacteria that naturally concentrate copper, turning what sounds like science fiction into a practical mining method. Even old waste piles from past mining operations could be worth reprocessing with better technology. More controversially, companies are eyeing copper-rich nodules sitting on the deep ocean floor. And while it sounds far-fetched, some are even thinking decades ahead to mining asteroids in space.</p><p>The crucial point is this: we can&#8217;t predict when breakthroughs will happen, but history shows they tend to arrive precisely when they&#8217;re needed most. High copper prices create powerful financial incentives for innovation, and even if no direct copper substitutes are discovered, the process of producing refined copper may itself improve significantly. When there&#8217;s serious money to be made solving a problem, human ingenuity has a remarkable track record of finding solutions. Necessity is the mother of invention.</p><p>A further point for the bear case is that whilst recycling cannot immediately fill the supply gap, infrastructure to improve collection, sorting, and reprocessing can scale faster than new mines can open. Policies mandating product take-back, deposit schemes for electronics, and urban mining initiatives can substantially increase recycling rates. Copper&#8217;s value provides strong economic incentives for recovery. 
Unlike plastics, copper recycling is profitable at moderate scale. Every tonne recycled reduces the need for mined copper by one tonne, with far lower energy requirements and environmental impact.</p><p>In addition, economic headwinds could significantly dampen demand, undermining predictions of sustained price increases. This matters particularly because much of copper&#8217;s current demand comes from traditional uses, not just the green energy transition. Construction, manufacturing, and general infrastructure still account for the majority of copper consumption globally. These sectors are cyclical and sensitive to economic conditions; during recessions or slowdowns, construction projects get delayed, manufacturing contracts, and infrastructure spending often gets cut. Consider China: as the world&#8217;s largest copper consumer, accounting for roughly 50% of global demand, its economic trajectory is crucial to any demand forecast. The country&#8217;s property sector alone has been a massive copper consumer for decades, through the wiring, plumbing, and air conditioning systems of countless apartment blocks and commercial buildings. Yet this sector is now contracting amid ongoing challenges with developer debt, demographic pressures, and changing government priorities. If China&#8217;s broader economic growth continues to slow, whether from property sector weakness, an ageing population, or reduced manufacturing activity, global copper demand takes a substantial hit regardless of how many electric vehicles are being built. The electrification story represents future demand growth, but it&#8217;s being layered onto a base of traditional demand that remains vulnerable to economic cycles. If economic headwinds cause traditional demand to soften significantly while new mines come online and recycling expands, the severe shortages that bullish forecasts predict may never materialise. 
The copper market could find itself adequately supplied, or even oversupplied, preventing the sustained price spikes that long-term bulls are anticipating.</p><p>Separately, Western nations have recognised the strategic vulnerability created by China&#8217;s dominance in copper refining (44% of global capacity). In response, governments in North America, Europe, and Australia are actively investing in domestic refining capacity through subsidies, loan guarantees, and public-private partnerships. This diversification effort addresses a genuine supply chain risk: even if mining output increases, inadequate refining capacity could create bottlenecks that limit the availability of finished copper. By building alternative refining infrastructure, Western nations are both securing their supply chains and adding global processing capacity that could ease potential constraints in the refining stage, ensuring that raw copper ore can be efficiently converted into the pure metal that manufacturers require.</p><p>All in all, the bear case against copper lies in the long-term cycle patterns of commodities. Ultimately, copper is a commodity without a moat or durable competitive advantage. Whilst specific mines or positions in the value chain confer temporary advantages, the product itself is undifferentiated. This means that sustained high prices will eventually attract sufficient capital to resolve shortages. Historically, this tends to be the fundamental dynamic of all commodity markets. Unlike technology products with network effects or intellectual property protection, there are fewer barriers preventing new entrants from developing copper mines, smelters, or substitutes if returns justify the investment, especially with government support.</p><h2>The Investment Situation</h2><h3>General Outlook</h3><p>In terms of the short-term situation, copper reached record highs above US$13,000 per metric tonne in early January 2026, surging over 40% from the previous year. 
This rally has sparked differing views on price sustainability. Goldman Sachs expects global copper markets to remain in surplus through 2026 (a 300kt surplus), with high prices dampening demand and boosting scrap supply. They forecast prices declining to US$11,000 per tonne by year-end, down from the January peak. Notably, the valuations of major copper mining companies imply US$5.49 per pound copper versus the current US$6.50, suggesting significant market scepticism about sustained high prices. Many analysts believe current prices have overshot fundamentals. Natalie Scott-Gray of StoneX noted: &#8220;While we have copper in a deeper deficit market year on year in 2026, we still do not see the market as historically out of balance... fundamentals certainly do not support copper at the current levels.&#8221; Steel&#8217;s muted reaction to tariff concerns, compared with copper&#8217;s frenzy, suggests speculation may be driving copper&#8217;s premium. US tariff policy remains a wild card. Goldman Sachs expects a 15% tariff on refined copper by mid-2026, though election-year affordability concerns could delay this. Tariff uncertainty is supporting prices as US buyers front-load purchases. Meanwhile, Chinese demand for refined copper fell 8% year-on-year in Q4 2025 as stimulus effects faded.</p><p>The critical long-term investment question centres on two interrelated uncertainties: will long-term demand materialise as strongly as projected, and can supply keep pace if it does? On the demand side, Goldman Sachs Research expects demand for copper to overtake supply from 2029 onwards, with the grid and other power infrastructure projected to drive more than 60% of copper demand growth until 2030, adding the equivalent of another United States&#8217; worth of copper demand. Data centre demand growth remains an upside risk, with demand from data centres alone potentially reaching 475,000 tonnes in 2026, up from 2025&#8217;s 110,000 tonnes. 
However, these projections assume continued aggressive AI infrastructure buildout and electrification at current pace. Economic slowdowns, policy shifts, or technological changes could moderate growth. On the supply side, the picture remains constrained but not static. Countries including Chile, the Democratic Republic of Congo, Brazil, and Iran are expected to push global output up by 2.3% for 2026 against 2025&#8217;s growth of 1.2%. A pipeline of expansions and new mining projects is already coming into operation, though not at a pace sufficient to eliminate deficits. History suggests that sustained high prices trigger massive capital deployment, regulatory adjustments, and technological innovation. The question is timing: will shortages arrive at all, and if they do, will they persist for 5, 10, or 15 years before supply catches up?</p><h3>Exposure Options</h3><p>Investors seeking copper exposure have several vehicles, each offering different risk-reward trade-offs. Major Mining Company Equities provide leveraged exposure to copper prices through operational gearing. Companies like Freeport-McMoRan (pure-play copper exposure), BHP Group (diversified with lower volatility), and Southern Copper (highest margins, lowest costs) offer direct participation in copper economics. Mining equities benefit from operational leverage: if copper prices rise 30%, company earnings might increase 60-100% or more due to fixed cost structures. However, they also face company-specific risks including operational disruptions, cost inflation, management execution, and capital allocation decisions. Forward P/E ratios ranging from 13x (BHP) to 27x (Southern Copper) reflect varying quality and growth expectations. Copper Mining ETFs such as COPX, ICOP, and COPP provide diversified baskets of copper producers with expense ratios of 0.47-1.06%. 
These vehicles reduce single-company risk whilst maintaining copper exposure, though they may underperform spot copper prices due to operational issues across holdings. ETFs work well for investors wanting sector exposure without individual stock selection. Copper Futures ETFs like CPER offer direct price exposure without equity market correlation or company-specific risks. However, they&#8217;re designed for short-term tactical trades rather than long-term holds due to roll yield effects from futures term structures. As futures contracts approach expiration, the ETF must &#8220;roll&#8221; into new contracts; when future-dated contracts are more expensive than near-term ones, this rolling process creates losses that erode returns over time, even if copper prices rise. Higher expense ratios (1.06%) and tracking complexities make them less suitable for buy-and-hold investors. Physical Copper remains impractical for most investors given storage costs and lack of accessible vault products, unlike precious metals. Junior miners and explorers provide asymmetric upside potential (potentially multibagger returns if projects succeed) but carry substantial risks from development failures, financing challenges, and permitting obstacles. These speculative positions require careful sizing and due diligence.</p><h3>Investment Considerations</h3><p><em>*Projections are for informational purposes only.</em></p><p>The copper investment thesis must be evaluated not merely on its own merits but against alternative uses of capital and the returns required to justify the risks undertaken. If copper reaches US$15,000-20,000 per tonne by the late 2020s as structural deficits materialise, representing 15-50% upside from current US$13,000 levels, mining company equities could deliver substantially higher returns through operational leverage. A pure-play producer might even see earnings double or more at US$18,000 copper versus US$13,000. 
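</p><p>To make the leverage arithmetic concrete, here is a minimal sketch. The US$7,000-per-tonne all-in cost is an assumed round number for illustration only, not any listed miner&#8217;s actual cost base:</p>

```python
# Hypothetical illustration of operational leverage in a copper miner.
# The cost figure is an ASSUMED all-in cost per tonne (US$), not company guidance.
def margin_per_tonne(price, cost=7_000):
    """Pre-tax margin per tonne at a given copper price."""
    return price - cost

base, bull = 13_000, 18_000
price_change = (bull - base) / base
margin_change = (margin_per_tonne(bull) - margin_per_tonne(base)) / margin_per_tonne(base)
print(f"price +{price_change:.0%}, margin +{margin_change:.0%}")  # price +38%, margin +83%
```

<p>Because costs are largely fixed, a roughly 38% price move produces a roughly 83% margin move under these assumptions; lower assumed costs shrink the amplification, higher costs magnify it.</p><p>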
Exact numbers are hard to pin down, and investors should always do their own valuation work, whether through multiples, a DCF model, or other methods; but if earnings were to jump this significantly, returns above the equity market average would be a reasonable expectation. However, this bull case requires multiple conditions to hold: demand must grow as projected, supply constraints must persist, no major substitutes can emerge, and macroeconomic conditions must remain supportive. The probability-weighted expected return depends on one&#8217;s conviction in each assumption.</p><p>If copper prices decline toward US$9,000-10,000 per tonne as supply responds and demand disappoints (a 20-30% downside from current levels), mining equities could fall 40-60% or more due to reverse operational leverage and margin compression. Even if prices remain stable near US$11,000-12,000 (Goldman Sachs&#8217;s base case), equity returns might simply track broader market averages or underperform if operational issues emerge. In this scenario, investors achieve mediocre single-digit returns whilst bearing substantial commodity and operational risks.</p><p>Capital allocated to copper carries opportunity cost measured against alternative investments. As a reminder, the S&amp;P 500 has delivered approximately 10% annualised returns historically. Government bonds currently yield 4-5% with minimal risk. Technology stocks, whilst volatile, have outperformed materials sectors over most timeframes. Real estate, other commodities, and international equities all compete for investor capital.</p><p>The copper thesis is fundamentally long-term, requiring 5-10 years for structural deficits to fully materialise. Shorter time horizons face near-term volatility from speculation, economic cycles, and Chinese demand fluctuations. 
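</p><p>The scenario framing in the preceding paragraphs (a bull case around US$15,000-20,000 copper, a base case near US$11,000-12,000, and a bear case toward US$9,000-10,000) can be combined into a toy probability-weighted expected return. Every probability and equity-return figure below is invented purely for illustration; substitute your own convictions:</p>

```python
# Toy probability-weighted expected return across hypothetical scenarios.
# All probabilities and equity returns are ASSUMED for illustration only.
scenarios = {
    "bull (US$15-20k copper)": (0.25, 0.80),   # (probability, equity return)
    "base (US$11-12k copper)": (0.50, 0.05),
    "bear (US$9-10k copper)":  (0.25, -0.50),
}
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9  # probabilities sum to 1
expected = sum(p * r for p, r in scenarios.values())
print(f"probability-weighted expected return: {expected:.1%}")
```

<p>Under these particular invented weights the expected return comes out near 10%, roughly in line with broad equity markets, which illustrates why the thesis hinges so heavily on one&#8217;s conviction in the bull scenario rather than on any single price target.</p><p>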
As mentioned before, Goldman Sachs expects prices to decline through 2026 before structural deficits emerge post-2029.</p><p>Compared to other commodities, copper presents unique characteristics. Unlike oil (subject to geopolitical shocks and OPEC manipulation) or agricultural commodities (weather-dependent), copper&#8217;s supply-demand dynamics are relatively transparent and forecastable. However, unlike precious metals, copper generates minimal portfolio insurance value during crises. Gold rallies during market stress; copper tends to fall with economic activity. This limits copper&#8217;s portfolio diversification benefits.</p><p>Perhaps the most critical consideration is how much future growth the current US$13,000 copper price already reflects. If markets have efficiently priced in 80% of the bull case, limited upside remains whilst downside risks loom large. Conversely, if current prices reflect only modest supply tightness, substantial upside exists. Goldman Sachs&#8217;s view that miners are pricing in US$5.49 per pound versus the current US$6.50 suggests significant optimism is already embedded in equity valuations, though copper bulls would argue the market remains sceptical of structural deficits.</p><h2>Navigating Uncertainty</h2><p>The copper market faces genuine uncertainty. The bull case presents compelling evidence for a prolonged structural deficit: demand is surging due to policy-driven electrification, supply growth faces unprecedented geological and regulatory constraints, the 17-year development timeline creates enormous lag in supply response, and substitutes remain limited at scale. Current market conditions in the form of record prices, depleted inventories, and supply disruptions support this narrative.</p><p>Yet the bear case offers important cautionary lessons from commodity market history. High prices inevitably trigger supply responses, even if delayed. Regulatory barriers that seem immutable can vanish when political will exists. 
Substitution potential may be far larger than current assessments suggest, particularly as innovation accelerates under price pressure. Demand proves more elastic than forecast when costs rise. And fundamentally, no commodity shortage persists indefinitely because markets adapt.</p><p>The most likely outcome may lie between the extremes. Copper could experience a prolonged period of elevated prices and periodic scarcity through the late 2020s and early 2030s as demand surges faster than new supply arrives. This would create genuine constraints on electrification and technology deployment, raising costs and potentially slowing the energy transition. Prices could reach US$15,000-20,000 per tonne or higher during acute shortages.</p><p>However, these high prices would simultaneously accelerate all the bear-case dynamics: aggressive mine development, regulatory streamlining, substitution research and deployment, recycling infrastructure, and efficiency improvements. By the late 2030s and 2040s, when the Bernstein research suggests supply catches up, the market could shift from scarcity to balance or even glut, particularly if demand growth disappoints due to economic slowdown, technological shifts, or policy changes.</p><p>For investors, copper presents both opportunity and risk. The structural shortage narrative could drive sustained high prices and strong returns for producers. Yet commodity market history urges caution against assuming shortages persist indefinitely. The eventual supply response, when it arrives, could be swift and overwhelming.</p><p>For society, copper scarcity represents a challenge to the pace of energy transition and technological change, but not an insurmountable barrier. Human ingenuity, market forces, and adaptive capacity have solved similar challenges throughout history. 
The question is not whether copper constraints will ultimately be overcome, but rather how long the adjustment takes, how much it costs, and what innovations emerge along the way.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/the-great-copper-investment-thesis?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/the-great-copper-investment-thesis?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/the-great-copper-investment-thesis/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/the-great-copper-investment-thesis/comments"><span>Leave a 
comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[SpaceX / xAI Merger]]></title><description><![CDATA[Go $1.25 Trillion or Go Home]]></description><link>https://www.bonnefinresearch.com/p/note-004-spacex-xai-merger</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/note-004-spacex-xai-merger</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Thu, 05 Feb 2026 10:15:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5a589b00-3799-4290-8ddd-71aa650fbe04_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In one of the biggest deals of all time, Elon Musk&#8217;s corporate empire has taken its most audacious turn yet. SpaceX&#8217;s acquisition of xAI creates a newly merged entity valued at $1.25 trillion (SpaceX at $1 trillion, xAI at $250 billion), consolidating two of Musk&#8217;s most ambitious ventures under one roof. This deal is, in theory, a bet that aerospace dominance and artificial intelligence leadership can be welded together into something greater than the sum of their parts. Yet the merger also raises fundamental questions about whether combining a sector-leading rocket company with an AI startup in an extremely competitive environment makes strategic sense, or whether it merely concentrates risk within Musk&#8217;s interconnected business ecosystem.</p><p>SpaceX has spent two decades establishing itself as the undisputed leader in commercial spaceflight. Founded in 2002, the company revolutionised the aerospace industry through reusable rocket technology, vertical integration, and innovative design/manufacturing practices, slashing launch costs and securing lucrative contracts with NASA and the Department of Defense. 
SpaceX&#8217;s Starlink satellite internet constellation has become a behemoth in its own right, operating around 10,000 of the approximately 15,000 active satellites in space and generating a revenue stream worth billions. All of this is occurring while the Starship programme promises to enable Mars colonisation and deep-space exploration (which has a far more uncertain chance of success). Recent reports suggested SpaceX was valued at approximately $800 billion in private markets before this merger, with persistent rumours of IPO discussions.</p><p>xAI, by contrast, represents a more precarious venture. Founded in mid-2023 as a direct competitor to OpenAI (the company Musk co-founded before acrimoniously departing) and other AI research labs, xAI launched with the explicit mission to create &#8220;truthful&#8221; AI that could rival frontier models. The company&#8217;s flagship product, Grok, was initially integrated into X (formerly Twitter). What began as a separate AI venture quickly evolved into something far more ambitious. After raising $6 billion in December 2024 at a valuation of approximately $45 billion, xAI acquired X Corp outright in March 2025 in an all-stock transaction that valued X at $33 billion and xAI itself at $80 billion, for a combined valuation of $113 billion. The transaction gave Musk complete control over both the social media platform and the AI company, creating a vertically integrated AI and social network entity. The precariousness lies in the fact that xAI&#8217;s competitive position is genuinely mixed. On technical benchmarks, Grok 3 achieved impressive results, including 93.3% accuracy on the 2025 American Invitational Mathematics Examination and becoming the first AI model to break a 1400 Elo score on the LMArena platform, demonstrating particular strength in mathematics and coding. Its integration with X also provided unique real-time information capabilities. 
However, significant challenges remained. With annualised revenues of approximately $500 million by mid-2025, xAI lagged far behind OpenAI&#8217;s $13 billion and Anthropic&#8217;s $7 billion. Enterprise adoption surveys showed strong interest but limited actual deployment compared to established competitors. Furthermore, the company&#8217;s reliance on massive computational scaling (using five to ten times more computing power than rivals) raised questions about business model sustainability. xAI was, and still is, under considerable pressure to convert technical achievements into meaningful market share (albeit now under the SpaceX umbrella).</p><p>The core strategic justification for this merger centres on orbital data centres: deploying computing infrastructure in space rather than on Earth. This concept has transitioned from science fiction to theoretically plausible engineering, and the SpaceX-xAI combination uniquely positions the merged entity to pursue it. Orbital facilities offer abundant solar power without atmospheric attenuation (the weakening of electromagnetic radiation, such as light and radio waves, as it travels through the Earth&#8217;s atmosphere) or weather disruption, radiative cooling that eliminates the massive energy costs of terrestrial data centre cooling systems, the avoidance of land acquisition and planning permissions, and modular scalability through standardised deployment. 
Starship&#8217;s projected costs of $10-20 million per launch with 100+ tonne payload capacity fundamentally change the economics, potentially making orbital deployment cost-competitive with land construction in certain scenarios.</p><p>AI training workloads are nearly ideal for orbital computing: they require massive computational resources running continuously for weeks or months, they generate enormous electricity and cooling costs on land, and crucially, they tolerate the 50-150 millisecond communication latency inherent in satellite links far better than interactive applications. AI training consumes electricity on a staggering scale, often enough to power a small city for months. On Earth, data centres compete with residents and industries for grid capacity, driving up costs and environmental strain. In orbit, satellites can receive near-constant sunlight, unobstructed by atmosphere or the day/night cycle. Solar panels in space are up to eight times more productive than their land counterparts, theoretically providing a near limitless, carbon-neutral energy source for continuous training runs. Another advantage concerns cooling: traditional data centres spend massive amounts of electricity and freshwater on cooling systems to prevent GPUs from overheating. HVAC infrastructure often accounts for nearly as much power consumption as the computing hardware itself. Space offers an elegant alternative. In the vacuum of orbit, there&#8217;s no air for fans or chillers to work with. Instead, orbital servers use passive radiative cooling where large radiators beam waste heat directly into the void as infrared radiation. This eliminates water usage entirely and drastically reduces the energy overhead typically devoted to temperature control. Finally, another significant advantage is that AI training is latency-tolerant. Interactive applications like gaming, video calls, or high-frequency trading require near-instant feedback, typically under 20 milliseconds. 
A 150ms delay would render these services unusable. But AI training is fundamentally different. It&#8217;s an asynchronous, batch-processing workload that runs over weeks or months. While the initial data upload takes marginally longer via satellite link, the real work (the computational heavy lifting) happens onboard the satellite. Once the data reaches the orbital server, the training process is entirely unaffected by communication delays with the ground. If orbital data centres prove viable, xAI gains access to computing resources at potentially lower long-term costs than competitors negotiating with terrestrial cloud providers, whilst SpaceX creates a new revenue stream that could eventually rival Starlink.</p><p>However, substantial technical challenges should temper this optimism and cannot be dismissed. Radiation in space degrades commercial electronics rapidly; radiation-hardened components exist but cost orders of magnitude more and lag commercial performance by generations. An orbital data centre might use processors equivalent to technology from five to ten years ago, negating performance advantages. Alternatively, one can accept higher failure rates with commercial components, but replacing failed modules requires expensive retrieval missions or robotic repair systems that don&#8217;t yet exist at commercial scale. Thermal management, whilst theoretically simpler through radiative cooling, requires large surface-area radiators that add mass and deployment complexity; dissipating heat from thousands of densely packed GPUs in microgravity (the condition in which people or objects appear to be weightless) involves solving engineering problems land data centres never face. The regulatory framework for commercial orbital infrastructure remains underdeveloped, creating uncertainty about jurisdiction, environmental impact assessment, and international treaty obligations. 
Success for this synergy depends on proving that these challenges, and presumably there will be many more, are indeed manageable at scale.</p><p>Beyond orbital computing, the claimed operational synergies appear modest. SpaceX uses autonomous technologies for spacecraft docking, booster landings, and satellite constellation management, and deep space missions could require advanced autonomy due to communication delays preventing real-time control. Integration with xAI could theoretically accelerate development of these capabilities, though the specific advantages over targeted partnerships or strategic hiring remain unclear, and keep in mind that SpaceX has successfully developed autonomous systems for two decades without owning an AI company. Data applications present similar ambiguity. Starlink generates network performance data and satellites produce Earth observation imagery, but most SpaceX data is highly specialised, with limited applicability to training general-purpose language models. The notion that Starlink network logs will somehow enable xAI to leapfrog OpenAI&#8217;s capabilities seems optimistic at best.</p><p>Thus, this merger also raises fundamental questions about business logic and resource allocation. Aerospace requires hardware engineering, manufacturing, and systems integration, whilst AI demands software development and machine learning research: disciplines requiring different talent, processes, and organisational cultures. SpaceX achieved dominance through relentless focus on reducing launch costs, and adding AI development risks diluting this focus without clear prioritisation frameworks for resolving competing demands. Resource allocation presents equally serious challenges: Starship development, Starlink expansion, orbital data centres, and xAI operations all require billions in capital simultaneously, forcing management to prioritise among divisions with vastly different risk profiles and timelines. 
This creates potential for internal conflict, compounded by compensation tensions as AI engineers command premium salaries that dwarf aerospace engineering pay scales, potentially disrupting SpaceX&#8217;s established culture and retention strategies.</p><p>One interpretation of the merger is that it may be better understood as IPO preparation rather than a permanent corporate structure. SpaceX employees have long sought liquidity, but Musk has resisted public listing to maintain operational control and avoid quarterly earnings pressures. Meanwhile, xAI would face brutal scrutiny in a standalone IPO, with investors demanding answers about minimal revenue and its competitive position against entrenched rivals. A combined entity offers some advantages: the $1.25 trillion valuation establishes a high reference point for pricing negotiations and potentially less dilution to existing ownership, xAI&#8217;s valuation issues may be rolled up and disguised in the performance of SpaceX, the scale attracts large institutional investors with mandates requiring substantial market capitalisation, and the diversified growth story spanning proven aerospace revenue with speculative AI upside creates optionality in marketing the offering. However, there are potential downsides as well. Public investors typically discount complexity, and explaining the aerospace-AI integration thesis whilst projecting credible combined financials will prove difficult. More problematically, public company disclosure requirements will expose xAI&#8217;s minimal revenue and substantial losses in granular detail, potentially undermining the unified growth narrative if the AI division appears to be draining profits from successful aerospace operations. 
Investors will want to see Starship&#8217;s operational status beyond test flights, xAI showing meaningful improvement in capabilities and revenue traction, orbital computing moving from concept to proof-of-concept, and a credible path to overall profitability.</p><p>The merger admits a few further interpretations as well. One view treats this as administrative efficiency, simplifying the management overhead of coordinating separate private companies, eliminating duplicated functions, and creating a unified path to eventual public listing. A more critical interpretation suggests defensive consolidation, with the merger primarily rescuing xAI from an untenable competitive position against currently better-performing rivals; integration provides potential subsidisation through Starlink contracts and SpaceX computing resources, avoiding the embarrassment of outright failure whilst the $250 billion xAI valuation provides face-saving cover for all parties involved in previous funding rounds. The most optimistic interpretation invokes long-term vision: Mars colonisation requires sophisticated AI for autonomous operations in communication-delayed environments, and developing these capabilities now makes strategic sense even if short-term financial returns remain unclear. Finally, there is the simple fact that Musk is the driving force behind these companies and Tesla, and he may simply be trying to consolidate the entities to simplify his workflow and operations.</p><p>As a quick overall assessment: the merger makes strategic sense for IPO preparation and simplification of operations. The valuation is highly optimistic relative to current fundamentals, and realistic outcomes depend on execution across Starship development, orbital computing deployment, and xAI improvement over the next 3-4 years. The technical opportunities are real but unproven, and while some of the strategic logic is sound, the valuation is aggressive. 
As they say, ideas are cheap; execution is everything. Only time will tell whether Musk&#8217;s ambitions will finally meet an insurmountable wall. So far, they haven&#8217;t.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/note-004-spacex-xai-merger?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/note-004-spacex-xai-merger?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/note-004-spacex-xai-merger/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/note-004-spacex-xai-merger/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Back of the Napkin Valuation 
Tools]]></title><description><![CDATA[Simple is Usually Best]]></description><link>https://www.bonnefinresearch.com/p/back-of-the-napkin-valuation-tools</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/back-of-the-napkin-valuation-tools</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Thu, 29 Jan 2026 23:00:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/26682556-2e81-4808-9c51-b11d5d2b3700_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Investment, at its core, is an undertaking of valuation. The market assigns a price to every asset at any point in time, reflecting the consensus view of market participants. Superior market returns emerge when your personal valuation differs from the market&#8217;s and you prove more correct than the market in those instances of difference. Being different isn&#8217;t enough; being different and correct is the source of superior returns. But how does one actually value a company or asset?</p><p>Despite the multitude of valuation approaches, one concept stands as the rational foundation over time: the intrinsic value of a company or asset is the cash it can generate for shareholders over its lifetime, discounted to the present. This encompasses both present and future cash flows, with the discount applied to account for the time value of money (opportunity cost). This definition is rational for a fundamental reason: regardless of what markets and other investors might be willing to pay for an asset, if you hold it, you will experience ownership of the underlying cash flows and thus have ownership over tangible returns. Discounted cash flow (DCF) models attempt to capture this reality, but they can become unwieldy and overly complex. Sophisticated models promise precision, with countless variables feeding into elaborate spreadsheets that project decades into the future. 
Yet history has repeatedly demonstrated that such precision is often illusory. The human world is simply too complex, too dynamic, and too unpredictable for any model to capture perfectly. A single unexpected change, whether it be a regulatory shift, a technological disruption, a new competitor, or any of countless other factors, can render an entire model obsolete overnight. This reality suggests that investing is more art than science.</p><p>As Einstein reportedly once said, &#8220;everything should be made as simple as possible, but not simpler.&#8221; In a world where many pursue the mirage of perfect accuracy, there is a genuine advantage in being generally correct rather than precisely incorrect. Back-of-the-napkin mathematics and checklists, rather than being unsophisticated, can actually serve as a powerful shortcut that distils core ideas and often proves more accurate than the complex precision of financial modelling. That is the purpose of this article: to briefly go through some of the simple quantitative and qualitative back-of-the-napkin mental models and tools that, while easy to apply, provide invaluable help in analysing an investment.</p><h3>The Quantitative Valuation Toolkit</h3><h4>Price-to-Earnings Ratio (P/E)</h4><p>The P/E ratio is perhaps the most intuitive and common valuation metric. It is normally calculated as the price per share divided by the earnings per share (after preferred dividends). It is commonly interpreted as asking: for every dollar of earnings, how much are you paying? However, it can also be interpreted as asking: how many years would it take to earn back your investment if current earnings were sustained?</p><p>A company trading at a P/E of 15 requires fifteen years of current earnings to accumulate back the money you have paid for it. Whether this represents good value depends on the company&#8217;s growth prospects, competitive position, and the returns available elsewhere. 
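As a rough sketch of this payback framing (the figures below are hypothetical, and the arithmetic assumes earnings stay flat, as the interpretation above does):

```python
# Sketch of the P/E payback interpretation, under the simplifying
# assumption that current earnings are sustained indefinitely.
# All figures are hypothetical.

def pe_ratio(price_per_share: float, eps: float) -> float:
    """Price paid per dollar of annual earnings."""
    return price_per_share / eps

def cumulative_earnings_fraction(pe: float, years: float) -> float:
    """Cumulative earnings as a fraction of the purchase price
    after `years`, assuming flat earnings."""
    return years / pe

pe = pe_ratio(price_per_share=150.0, eps=10.0)  # P/E of 15
# At a P/E of 15, fifteen years of flat earnings accumulate
# back 100% of the purchase price.
recovered = cumulative_earnings_fraction(pe, years=15)
```

In reality earnings are never flat, so treat the payback figure as an orientation point rather than a forecast.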
A mature, slow-growing business might deserve a P/E of 10, whilst a rapidly expanding company in a growing market might justify a P/E of 30 or higher. The key is understanding what you&#8217;re paying for. For context, the S&amp;P 500 P/E ratio has historically been around 15-20; in recent decades, however, it has normally sat in the 20-25 range.</p><p>Another mental trick I find useful at times, though not common to my knowledge, is that the P/E ratio can be used as a rough yardstick for cumulative earnings relative to the purchase price: under a sustained earnings assumption, a P/E of 10 implies annual earnings equal to 10% of the purchase price, cumulative earnings equal to the purchase price after ten years, and cumulative earnings equal to 200% of the purchase price after twenty years.</p><h4>Price/Earnings-to-Growth Ratio (PEG)</h4><p>The PEG ratio refines the P/E metric by incorporating growth expectations, providing a more nuanced valuation framework, as much of a company&#8217;s value comes from its future ability to generate cash. It is calculated by dividing the P/E ratio by the expected annual earnings growth rate. A PEG ratio of 1.0 is often considered fair value, suggesting you&#8217;re paying proportionally for the growth you&#8217;re receiving.</p><p>For example, a company trading at a P/E of 30 with expected earnings growth of 30% annually has a PEG of 1.0 (30 &#247; 30). This same company might appear expensive on a P/E basis alone, but when growth is factored in, the valuation appears more reasonable. Conversely, a company with a P/E of 15 but only 5% growth has a PEG of 3.0, suggesting overvaluation relative to the growth on offer.</p><p>The PEG ratio is particularly useful when comparing companies with different growth profiles. It helps answer the question: &#8220;Am I paying a reasonable price for this growth?&#8221; However, it comes with important caveats. 
The metric is only as good as the growth estimate used, and growth estimates are notoriously unreliable, especially beyond a few years. Additionally, PEG ratios don&#8217;t account for differences in business quality: for example, a company with a sustainable moat growing at 20% is usually worth more than a commodity business growing at the same rate in the long run.</p><p>Despite these limitations, the PEG ratio serves as a useful quick-reference tool for initial screening and for contextualising what might otherwise appear to be an expensive P/E multiple. It reminds us that growth has value and that a high P/E isn&#8217;t necessarily problematic if it&#8217;s accompanied by commensurately high growth. The key is understanding whether that growth is sustainable, profitable, and achievable.</p><h4>Earnings Yield (E/P)</h4><p>The inverse of the P/E ratio, the earnings yield expresses valuation as a percentage return. A company with a P/E of 20 has an earnings yield of 5%. This framing allows for direct comparison with alternative investments and provides an intuitive sense of what you&#8217;re actually receiving for your investment.</p><p>The earnings yield represents the annual earnings generated relative to what you&#8217;re paying for the asset. More precisely, it shows what percentage of your purchase price the company earns back each year. Think of it like a bond yield in concept: if you buy a bond yielding 5%, you receive 5% of your investment back in interest. Similarly, a 5% earnings yield means the company generates earnings equivalent to 5% of its market capitalisation annually, though the company may retain some or all of those earnings rather than paying them out as cash.</p><p>This metric becomes particularly powerful when comparing stocks to other investment opportunities. 
If ten-year government bonds yield 4% with virtually no risk, and a stock offers an earnings yield of 5%, you must ask whether that additional 1% adequately compensates you for the equity risk, lack of guaranteed payments, and business uncertainties involved. Conversely, if that same stock has strong competitive advantages, consistent earnings growth, and operates in a favourable industry, the 5% earnings yield might represent exceptional value compared to the risk-free rate.</p><p>The earnings yield also helps frame the opportunity cost of investment decisions. A company with a 10% earnings yield (P/E of 10) is returning earnings at double the rate of one with a 5% earnings yield (P/E of 20). Whether the latter justifies its lower yield depends entirely on its growth prospects, quality, and sustainability of earnings. High-quality compounders often trade at low earnings yields because investors recognise that today&#8217;s earnings will grow substantially over time, making the effective yield on today&#8217;s purchase price increasingly attractive in future years. Thus the earnings yield should be considered alongside the quality and sustainability of those earnings. A 15% earnings yield on a deteriorating business in a dying industry may be far less attractive than a 3% earnings yield on a high-quality compounder with decades of runway ahead. The yield tells you what you&#8217;re getting today; your analysis must determine what you&#8217;re likely to get tomorrow.</p><h4>Return on Invested Capital (ROIC)</h4><p>ROIC measures how efficiently a company generates profits from its total capital base, both debt and equity combined. It answers the question: for every dollar of capital deployed, how much profit does the business generate? The metric is usually calculated by dividing Net Operating Profit After Tax (NOPAT) by Invested Capital, where Invested Capital equals Total Assets minus Current Liabilities, or alternatively, Debt plus Equity. 
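The ROIC calculation just described can be sketched in a few lines; the profit, tax rate, and balance sheet figures below are hypothetical:

```python
# Sketch of ROIC: NOPAT divided by Invested Capital, with Invested
# Capital taken as Total Assets minus Current Liabilities.
# All inputs are illustrative, not from any real company.

def nopat(operating_profit: float, tax_rate: float) -> float:
    """Net Operating Profit After Tax."""
    return operating_profit * (1.0 - tax_rate)

def invested_capital(total_assets: float, current_liabilities: float) -> float:
    return total_assets - current_liabilities

def roic(nopat_value: float, capital: float) -> float:
    """Capital-structure-neutral return on deployed capital."""
    return nopat_value / capital

profit = nopat(operating_profit=400.0, tax_rate=0.25)                        # 300.0
capital = invested_capital(total_assets=2000.0, current_liabilities=500.0)   # 1500.0
result = roic(profit, capital)                                               # 0.2, i.e. 20%
```

Because the denominator combines debt and equity, the same figure comes out regardless of how the business is financed, which is exactly why the text calls it capital structure neutral.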
This calculation reveals how efficiently a company converts deployed capital into operating profits, and it is capital structure neutral, meaning it examines returns before considering how the business is financed.</p><p>A high ROIC tends to be a good sign, as it indicates a business has found a way to generate substantial returns without needing to deploy vast amounts of capital. This allows a far lower reinvestment rate of its income to achieve the same growth as a lower-ROIC business, which tends to create far better outcomes for shareholders, especially when compounded over time. This often signals competitive advantages, whether through brand strength, network effects, or operational excellence. A company consistently generating 20%+ returns on invested capital is doing something right. Think of ROIC as the company&#8217;s return on every dollar invested in the business, regardless of whether that capital came from debt or equity. A ROIC above 15% generally indicates a strong business, whilst above 20% suggests meaningful competitive advantages. However, you must remember that for a high ROIC to translate into sustained strong growth, the company must also possess the internal or external opportunity to reinvest its earnings at those same high rates going forward.</p><h4>Reinvestment Rate &#215; ROIC</h4><p>The fundamental relationship between earnings growth, reinvestment, and returns on capital is elegantly captured in a simple formula: Earnings Growth = Reinvestment Rate &#215; ROIC. Rearranged, this becomes Reinvestment Rate = Earnings Growth &#247; ROIC, which reveals why ROIC matters so profoundly. Consider two companies both targeting 6% earnings growth: Company A with 20% ROIC needs to reinvest only 30% of its profits (6% &#247; 20% = 30%), whilst Company B with 8% ROIC must reinvest 75% (6% &#247; 8% = 75%). 
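The identity above can be restated in code; the figures mirror the hypothetical Companies A and B:

```python
# Sketch of the growth identity: earnings growth = reinvestment rate x ROIC,
# rearranged to find the reinvestment rate a growth target requires.
# Companies A and B are the hypothetical examples from the text.

def required_reinvestment_rate(target_growth: float, roic: float) -> float:
    return target_growth / roic

def implied_growth(reinvestment_rate: float, roic: float) -> float:
    return reinvestment_rate * roic

# Both companies target 6% earnings growth.
company_a = required_reinvestment_rate(target_growth=0.06, roic=0.20)  # -> 30% of profits
company_b = required_reinvestment_rate(target_growth=0.06, roic=0.08)  # -> 75% of profits

# Company A keeps the remaining 70% of earnings free for dividends or buybacks.
payout_capacity_a = 1.0 - company_a
```

The rearrangement makes the trade-off explicit: for a fixed growth target, every point of extra ROIC frees up earnings that would otherwise have to be ploughed back.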
Company A achieves identical growth whilst retaining 70% of earnings for dividends or buybacks, whereas Company B must plough back nearly all profits just to keep pace. This formula explains why high-return businesses can be so attractive to investors: they can grow rapidly, return cash generously, or do both simultaneously.</p><h4>Return on Equity (ROE)</h4><p>Similar to ROIC but focusing specifically on shareholders&#8217; equity, ROE measures the return generated on the owners&#8217; investment. The calculation is straightforward: Net Income divided by Shareholders&#8217; Equity. This shows the return generated specifically on shareholders&#8217; money&#8212;what equity investors actually earn on their stake in the business. ROE above 15% is generally considered strong, whilst 20%+ is exceptional. However, ROE can be misleading if a company is highly leveraged, as debt magnifies returns (and losses). It should therefore be considered alongside debt levels.</p><p>ROE&#8217;s critical weakness is that leverage amplifies it: a mediocre business with substantial debt can show impressive ROE simply through financial engineering. This is why you must examine ROE alongside debt levels.</p><h4>Debt-to-Equity Ratio</h4><p>This simple metric reveals how a company finances its operations and is calculated by dividing Total Debt by Total Shareholders&#8217; Equity. The ratio reveals the proportion of debt versus equity financing. Essentially, it is how much the company has borrowed relative to what shareholders have invested. A ratio of 1.0 means equal parts debt and equity; below 0.5 suggests conservative financing; above 2.0 indicates aggressive leverage. However, appropriate levels vary dramatically by industry: capital-intensive businesses like utilities may operate comfortably at 1.5-2.0, whilst software companies typically carry minimal debt.</p><p>High debt levels can amplify returns during good times but create existential risks during downturns. 
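A minimal sketch of the debt-to-equity calculation, using the rough rule-of-thumb bands above (conservative below 0.5, aggressive above 2.0); the balance sheet figures are hypothetical, and appropriate levels vary by industry:

```python
# Debt-to-equity: total debt divided by total shareholders' equity,
# bucketed with the rule-of-thumb thresholds from the text.

def debt_to_equity(total_debt: float, shareholders_equity: float) -> float:
    return total_debt / shareholders_equity

def leverage_label(ratio: float) -> str:
    """Rough classification; industry context matters in practice."""
    if ratio < 0.5:
        return "conservative"
    if ratio > 2.0:
        return "aggressive"
    return "moderate"

ratio = debt_to_equity(total_debt=300.0, shareholders_equity=1000.0)  # 0.3
label = leverage_label(ratio)                                         # "conservative"
```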
Understanding capital structure is essential to understanding risk. A company with minimal debt has far more flexibility and resilience than one loaded up on a mountain of borrowings. When analysing debt, also examine interest coverage (EBIT divided by interest expense) to ensure the company can comfortably service its borrowings even if profits decline.</p><h4>Price-to-Sales Combined with Gross Margins</h4><p>For high-growth businesses that are reinvesting heavily and may not yet be profitable, price-to-sales offers a useful alternative metric. Price-to-Sales can be calculated by dividing Market Capitalisation by Annual Revenue, showing how much investors pay for each dollar of revenue. However, it should be considered alongside gross margins, which are calculated as Revenue minus Cost of Goods Sold, divided by Revenue. Gross margin reveals what percentage of each sale remains after direct production costs. A company with 80% gross margins trading at 10&#215; sales is fundamentally different from one with 20% gross margins at the same multiple. The former usually, but not always, has more economic potential once it reaches maturity and can harvest those margins.</p><p>This combination is particularly valuable for analysing reinvestment-stage businesses where current profitability is suppressed by deliberate growth investments. A SaaS company with 80% gross margins trading at 8&#215; sales may be considerably cheaper than a retailer with 30% gross margins at 2&#215; sales once you consider their respective profit potential.</p><h4>Free Cash Flow Yield</h4><p>The logic of free cash flow yield is largely the same as that of the earnings yield, but it is useful because earnings can be manipulated through accounting choices, whereas cash is harder to fake. Free cash flow yield, which can be roughly calculated as free cash flow divided by market capitalisation, reveals what percentage return you&#8217;re receiving in actual cash generated by the business. 
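The free cash flow yield arithmetic can be sketched as follows (figures hypothetical, with free cash flow approximated as operating cash flow less capital expenditures):

```python
# Sketch of free cash flow yield: free cash flow divided by
# market capitalisation. All figures are illustrative.

def free_cash_flow(operating_cash_flow: float, capex: float) -> float:
    """Rough FCF: cash from operations less capital expenditures."""
    return operating_cash_flow - capex

def fcf_yield(fcf: float, market_cap: float) -> float:
    return fcf / market_cap

fcf = free_cash_flow(operating_cash_flow=120.0, capex=40.0)  # 80.0
result = fcf_yield(fcf, market_cap=1600.0)                   # 0.05, i.e. a 5% cash yield
```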
Free Cash Flow itself is usually calculated as Operating Cash Flow minus Capital Expenditures, representing the cash a business generates for shareholders and debtholders after maintaining and growing its asset base.</p><h4>Operating Leverage</h4><p>Operating leverage describes how profits respond to changes in revenue and can be calculated as the percentage change in operating profit divided by the percentage change in revenue. This measures how sensitive profits are to changes in revenue: it shows how much of each additional dollar of sales flows through to profit. Businesses with high fixed costs and low variable costs exhibit high operating leverage. Once they cover their fixed costs, additional revenue flows predominantly to profit. The inverse holds true: businesses with low fixed costs and high variable costs exhibit low operating leverage.</p><p>Understanding a company&#8217;s operating leverage helps predict how profitability will evolve as the business scales. A company with high operating leverage can see dramatic profit expansion from modest revenue growth, making it particularly attractive if you can identify inflection points in revenue trajectory. Software companies exemplify this: once the software is built (fixed cost), serving additional customers costs almost nothing (minimal variable cost). A SaaS business might see a 10% revenue increase translate to a 40% profit increase. This creates dramatic profit acceleration once revenue inflection occurs but works equally powerfully in reverse: revenue declines hit profits disproportionately hard. Traditionally, manufacturing and airlines show high operating leverage; consulting businesses show low leverage since costs (people) scale directly with revenue. 
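The operating leverage calculation above can be sketched as follows; the figures echo the illustrative SaaS example, where 10% revenue growth drives a 40% profit increase:

```python
# Degree of operating leverage: percentage change in operating profit
# divided by percentage change in revenue. Figures are hypothetical.

def pct_change(old: float, new: float) -> float:
    return (new - old) / old

def operating_leverage(profit_old: float, profit_new: float,
                       revenue_old: float, revenue_new: float) -> float:
    return pct_change(profit_old, profit_new) / pct_change(revenue_old, revenue_new)

# Revenue up 10% (1000 -> 1100), operating profit up 40% (50 -> 70):
dol = operating_leverage(profit_old=50.0, profit_new=70.0,
                         revenue_old=1000.0, revenue_new=1100.0)  # DOL of 4
```

A degree of operating leverage of 4 cuts both ways: a 10% revenue decline would imply roughly a 40% profit decline under the same cost structure.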
When analysing companies, high operating leverage often increases both potential returns and risks, making it crucial to assess revenue stability alongside the leverage itself.</p><h4>Owner Earnings (Buffett&#8217;s Measure)</h4><p>Warren Buffett refined the concept of free cash flow into what he calls &#8220;owner earnings,&#8221; calculated as Net Income plus Depreciation and Amortisation, minus Capital Expenditures required to maintain competitive position, minus any increases in Working Capital requirements. This metric attempts to answer the question: how much cash could the owners actually extract from the business annually without harming its competitive position? Owner earnings aims to provide a more realistic picture of economic reality than accounting earnings, which may include non-cash charges or exclude necessary reinvestment. It attempts to represent the true economic benefit flowing to owners.</p><h3>The Qualitative Valuation Toolkit</h3><h4>Moat Analysis: Hamilton Helmer&#8217;s 7 Powers</h4><p>Beyond the numbers, sustainable competitive advantages or &#8220;moats&#8221; play a large role in determining whether a company can maintain attractive economics over time. Hamilton Helmer&#8217;s &#8220;7 Powers&#8221; framework provides a structured approach to identifying genuine competitive advantages:</p><p><em>Scale Economies</em>: Cost advantages that arise from size. The larger the company, the lower the per-unit cost. This creates a self-reinforcing cycle where the biggest player can underprice competitors whilst maintaining superior margins.</p><p><em>Network Effects</em>: The value of the product or service increases with each additional user. Social networks, marketplaces, and payment platforms all benefit from network effects. 
Each new participant makes the network more valuable for all existing participants.</p><p><em>Counter-Positioning</em>: A new business model that incumbents cannot adopt without cannibalising their existing, profitable business. Netflix&#8217;s streaming model counter-positioned Blockbuster&#8217;s rental stores; the incumbent couldn&#8217;t respond effectively without destroying its own economics.</p><p><em>Switching Costs:</em> Customers face significant costs&#8212;financial, time-based, or psychological&#8212;to change providers. Enterprise software often exhibits high switching costs due to integration complexity and employee training requirements.</p><p><em>Branding</em>: Affective associations that increase willingness to pay. True brand power means customers will pay more for your product than for an identical generic alternative, purely because of the brand.</p><p><em>Cornered Resources</em>: Preferential access to valuable assets that competitors cannot easily replicate. This might include patents, exclusive contracts, unique mineral deposits, or irreplaceable talent.</p><p><em>Process Power</em>: Embedded company processes and organisational knowledge that cannot be easily replicated, even when visible to competitors. Toyota&#8217;s manufacturing excellence exemplifies process power.</p><p>A company possessing one or more of these powers may have a genuine moat. Without such advantages, attractive returns will eventually be competed away: unless a company has special advantages that make its process and product superior, you must assume that competent and well-backed competition will always emerge wherever there is high-margin profitability. 
The durability of excess returns depends entirely on the strength and sustainability of these competitive advantages.</p><h4>Psychological Analysis: Robert Cialdini&#8217;s 7 Principles of Influence</h4><p>Understanding the psychological forces at play in both consumer and investor behaviour often provides an additional dimension of insight. Robert Cialdini&#8217;s seven principles of influence reveal how human psychology impacts business success and market pricing.</p><p><em>Liking</em>: People prefer to say yes to those they like. We are more easily influenced by people who are similar to us, who compliment us, who are physically attractive, or with whom we cooperate towards mutual goals. This principle operates automatically and often unconsciously; we naturally favour those we find likeable, even when that likeability is irrelevant to the decision at hand. The feeling of affection or positive regard creates a bias towards compliance and favourable treatment.</p><p><em>Reciprocity:</em> People feel obligated to return favours. When someone gives us something, whether a gift, a concession, or an act of kindness, we experience psychological pressure to reciprocate. This obligation is deeply embedded in human nature and exists across all cultures. The reciprocity impulse is so strong that it can override our normal decision-making processes, causing us to say yes to requests we would otherwise refuse. Notably, we often feel compelled to return favours that are larger than what we originally received.</p><p><em>Scarcity</em>: People value things more when they perceive them as scarce or decreasing in availability. The possibility of losing something or missing an opportunity creates psychological tension and urgency. Scarcity works through two mechanisms: rare items are often more valuable, and scarcity signals that other people want the item (social proof). 
Importantly, the perception of potential loss is typically more motivating than the prospect of equivalent gain. Time limits, limited quantities, and exclusive access all trigger scarcity responses.</p><p><em>Authority</em>: People defer to and follow the guidance of credible experts and authority figures. From childhood, we&#8217;re conditioned to respect and obey legitimate authorities, whether doctors, professors, or law enforcement, and to defer to credentials, titles, and even superficial symbols like uniforms or impressive settings. This deference to authority is often automatic and operates even when the authority&#8217;s expertise isn&#8217;t directly relevant to the situation. We use authority as a decision-making shortcut, assuming that experts know best and that following their lead reduces our risk of error.</p><p><em>Social Proof</em>: People look to the actions and behaviours of others to determine their own, especially in situations of uncertainty. When we&#8217;re unsure what to do, we assume that others possess more knowledge about the situation and that their actions reflect the correct behaviour. This tendency is particularly strong when we observe people similar to ourselves, when we see large numbers of people acting the same way, or when we&#8217;re in ambiguous situations. Social proof operates automatically, and we often don&#8217;t consciously realise we&#8217;re being influenced by what others are doing.</p><p><em>Commitment and Consistency</em>: People have a deep desire to be consistent with things they have previously said or done. Once we make a choice or take a stand, we encounter personal and interpersonal pressure to behave in ways that align with that commitment. This drive for consistency is so powerful that we&#8217;ll often persist with commitments even when they no longer serve our interests. 
The consistency principle is strongest when commitments are active (we took action), public (others know about it), effortful (we invested resources), and voluntary (we weren&#8217;t coerced). We modify our attitudes and beliefs to align with our past actions.</p><p><em>Unity</em>: People are influenced by appeals to shared identity and belonging. Unity is about being part of an &#8220;us&#8221;, sharing an identity with others. This goes beyond mere liking or familiarity; it&#8217;s about kinship, whether defined by family, geography, ethnicity, nationality, political affiliation, or shared beliefs and experiences. We&#8217;re more receptive to requests from those who are part of our in-group and less receptive to those outside it. The unity principle taps into our fundamental need for belonging and our tendency to favour those with whom we share identity markers. Creating a sense of &#8220;we&#8221; rather than &#8220;you and me&#8221; dramatically increases influence.</p><p>These principles operate on two levels for investors. First, they reveal hidden business strengths: companies effectively leveraging multiple psychological principles may possess stronger customer economics and more durable moats than financial statements alone suggest. Second, these same forces can distort prices in investment markets. Scarcity drives bubbles, social proof creates momentum cascades, authority figures move markets beyond fundamentals, and commitment bias keeps investors trapped in losing positions.</p><h3>Conclusion</h3><p>None of these metrics tells the complete story in isolation. A low P/E might indicate value or a dying business. High ROIC could reflect a genuine competitive advantage or unsustainable conditions. Strong psychological influence might mask deteriorating fundamentals, or it might represent an underappreciated moat. 
The art lies in combining these tools, understanding their limitations, and developing a holistic picture of a company&#8217;s economics, competitive position, and the psychological forces at play. There is certainly room for DCFs and complex valuations in this world, and sometimes they are truly useful; many times, however, back-of-the-napkin tools serve just as well, if not better, in analysing and valuing a business.</p><p>Simple tools don&#8217;t just save time; they often enable deeper insight. When you can initially assess a business with a few key ratios, a competitive latticework, an understanding of psychological moats, or whatever other frameworks you carry, you can hold more ideas in your head simultaneously. It becomes easier to spot patterns, make comparisons, connect the dots, and reach better conclusions. The ability to cut through complexity and focus on what truly matters is a rare and valuable skill. Yet again, as Einstein is once reported to have said, &#8220;everything should be made as simple as possible, but not simpler.&#8221;</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. 
Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/back-of-the-napkin-valuation-tools?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/back-of-the-napkin-valuation-tools?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/back-of-the-napkin-valuation-tools/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/back-of-the-napkin-valuation-tools/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Mandate of Capital]]></title><description><![CDATA[Heavy Is the Crown]]></description><link>https://www.bonnefinresearch.com/p/the-mandate-of-capital</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/the-mandate-of-capital</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Sun, 25 Jan 2026 23:01:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8f46e841-5f11-42a2-b965-50f55a7c1752_1200x630.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>The Mandate of Heaven is a political philosophy concept from ancient China used to justify the rule of emperors and dynasties for thousands of years. The mandate rests on the belief that heaven (&#22825;) grants the right to rule to a just and virtuous leader: a form of divine endorsement that legitimises the ruler&#8217;s authority. The ruler must govern with virtue and benevolence, for their ability to maintain the mandate depends on their moral conduct and effectiveness in leading the country. Natural disasters, famines, and social unrest were interpreted as signs that heaven was displeased, indications that the ruler had lost the mandate, and could invite rebellion and regime change. A lost mandate could be transferred to a new ruler or dynasty, justifying the rise of new dynasties and the replacement of old ones, and providing a moral and philosophical rationale for political change.</p><p>While the Mandate of Heaven faded in relevance with the end of Imperial China, there exists a natural Mandate of Capital in the world that shares many of its features. The flow and story of capital follow a rather similar pattern. Possessing capital entails power and responsibility, since controlling the allocation of resources dictates how we as humans and as a society live. Retaining that capital requires the continued stability of the economic system one operates in, as well as the continued support of individuals through their labour and consumption. Sustained misallocation of capital, whether by an individual or a larger entity, always results in their demise over the long term.</p><p>I must also define what I am referring to when it comes to the term capital. 
It is a word with varying definitions and connotations depending on the context of use, so for the purposes of this writing, I will define capital as any asset used in the production of goods and services, including physical, human, financial, and natural resources. In essence, capital is any asset that confers value or benefit on its owner. Furthermore, while it may seem obvious, I must clarify that capital is not capitalism. There is a notion among some that allocating capital is exclusive to capitalists, but that is a mistake. Capitalism is one economic ideology, but every ideology of economic and social organisation, whether socialism, communism, anarchism or anything in between, deals directly or indirectly with how we allocate and use capital.</p><p>The truth is, throughout history and for the foreseeable future, the allocation of capital sits at the forefront of every decision we make as individuals and as a collective. Whatever your beliefs about how wealth should be distributed, whether wealth is the root of evil, how society should be organised, and so on, how capital will be allocated is a question we are constantly asking and attempting to answer. Even in saying that you reject all notions of material wealth and capital, you are stating a belief about how capital should be allocated and utilised. Capital dictates how we live our lives.</p><p>To receive the power and privileges of capital, one must first obtain it. Depending on the economic system one is part of, the methods for doing so will differ to a greater or lesser degree. As a general rule of thumb, capital is obtained through the support of the individuals within a system. Whether through monetary consumption, direct trade, ideological support, human labour, or more, capital is obtained when the individuals of a system entrust one with their stake of goods and services, whether through a transaction of some sort or a non-reciprocal transfer. 
For example, it could be the payment of money to one party in return for goods or services. It could be volunteer work and donations in support of a political campaign or movement. If there is enough collective support, it could be the willingness to surrender all private capital to state control in certain political and economic systems. The list goes on. What matters is that capital is obtained by capturing the imagination and commitment of others, whether by trading goods or services, offering a new political system, and so on; it is through the willingness of others to exchange their time, labour, resources, knowledge and products that one obtains capital to dictate. Those who do not possess adequate capital face a precarious situation. At the most extreme and serious end, without adequate access to capital and resources, one will struggle to meet basic survival needs. More moderately, a lack of adequate capital restricts the ability of individuals to shape the world as they please. They are limited in how fully they can spend, and thus allocate, resources as they desire. Regardless of what economic system you preach, obtaining capital is vital for survival and for the achievement of your goals.</p><p>The distribution of capital is vital, and those who possess it bear great responsibility. From everyday consumption patterns to the influential allocation decisions of large entities, how capital is spent will ultimately shape the world and determine whether its holders can maintain this power of allocation over the long term. Once capital is obtained, it is usually first put to personal use, satisfying the individual desire to improve or maintain living standards in private life. 
The excess capital, which is often quite significant for those with the most power and influence in any capital system, raises large questions about how the surplus under their control should be wielded. The projects invested in, the people being paid, and how resources are generally distributed are a fairly reliable indicator of what the future world will look like. Poor capital allocation will lead to a loss of capital. Extravagant and wasteful spending tends to breed instability in economic systems; one only has to look at past kings whose ill-advised allocation of resources created instability as their subjects&#8217; living standards suffered. Such poor allocation often led to the downfall of the king and the system built around him, bringing new individuals into positions of power with new systems of their own. In every system and era imaginable, how capital is utilised is a strong factor in determining success and failure at both the institutional and individual level. How we allocate our currency is ultimately how we allocate our resources.</p><p>The point is this: regardless of your political beliefs and the system you live in, it is through the control of capital that we are most able to influence and shape the material world to how we want it to be. Hence, much like the Mandate of Heaven, there is a natural Mandate of Capital. Whoever can obtain and allocate capital shall be far more able to shape the material world as they see fit. Those bestowed with the responsibility of capital should allocate responsibly, for the misuse of capital leads to its loss, whether through the allocation power of individuals in free-market systems, the collective power of the state in controlled economies, or otherwise. 
Without the will and support of the people, one may maintain a capital foothold in the short term, but in the long term it will collapse. Without the people&#8217;s support and labour, no economic system, and no capital allocator within one, can survive. The Mandate of Capital is simple: in the long term, those who create value relative to what the people want will hold the power of allocation. They may maintain their position of strength through wise and responsible allocation of resources, but if they fail in this role and lose the support of the people, the capital system and its allocators will change as the people see fit. This mandate applies to everyone, from the average person working a common job to the select few with the privilege of wealth; it simply manifests to varying degrees.</p>]]></content:encoded></item><item><title><![CDATA[Intel’s Disappointing Earnings]]></title><description><![CDATA[A 17% Drop on a Poor Earnings Report]]></description><link>https://www.bonnefinresearch.com/p/notes-003-intels-disappointing-earnings</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/notes-003-intels-disappointing-earnings</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Sat, 24 Jan 2026 23:00:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/63df854f-2e5b-4af2-8cd3-e3ebb89a01e0_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Intel has been on quite the run over the past year and a bit. I remember when I first entered my position in August 2024, it was beaten down and at its lowest in years (I have since exited my position as of the time of writing). My investment thesis at the time was based on a straightforward value proposition: the stock traded at a substantial discount to both its balance sheet and its earnings potential, especially given the company&#8217;s strategic importance to Western semiconductor manufacturing and management&#8217;s transparent efforts to address the challenges undermining its former reputation. In my opinion, it was a cheap valuation for a company with such extensive manufacturing assets, intellectual property, market integration and strategic importance, with Intel&#8217;s market capitalisation sitting below $100 billion. 
Since then, while progress has been made in the operations of the business, the stock has been on an extraordinary run as government investment and support, paired with positive semiconductor industry sentiment, pushed the stock to over $54 per share by the close of trading on 22 January 2026, a staggering gain of roughly 150% in just seventeen months and a market capitalisation of $259 billion.</p><p>Then, after the market closed on the 22<sup>nd</sup> of January, the company&#8217;s shares plunged as much as 13% in after-hours trading on its quarterly earnings report, and finished down 17% after the first full day of trading, erasing billions in market value and interrupting the recent momentum. The fourth-quarter results were solid and slightly better than analysts expected, but management&#8217;s guidance disappointed. Intel reported fourth-quarter revenue of $13.7 billion and adjusted earnings per share of 15 cents, beating Wall Street expectations of $13.4 billion in revenue and 8 cents per share. Guidance for the first quarter of 2026, however, told a troubling story: Intel projected revenue between $11.7 billion and $12.7 billion with breakeven adjusted earnings per share, below analyst expectations. One of the core reasons, as CFO David Zinsner explained, is that Intel&#8217;s buffer inventory has been depleted and a Q3 wafer mix shift from client CPUs (desktop/laptop processors) to server CPUs (data centre chips) won&#8217;t come out of fabrication until late Q1 2026. This creates a timing gap in which there is no buffer stock and the new production hasn&#8217;t yet emerged. In other words, Intel mispredicted customer demand. It forecast that hyperscaler customers would increase core count per server rather than total server units; instead, data centre unit demand rose rapidly and significantly in Q3&#8211;Q4, catching Intel completely off guard. 
Because the forecast was wrong, Intel allocated too much fab capacity to client chips (laptops/desktops) and not enough to servers. When the mistake was realised in Q3, wafer production began shifting from client to server CPUs, but these server-focused wafers won&#8217;t exit fabrication until late Q1 2026. Meanwhile, Intel&#8217;s buffer inventory was depleted after drawing down stockpiles to meet the unexpected demand surge in H2 2025. Furthermore, while the company has depleted its inventory of the high-demand server chips customers want, the production misallocation means it is potentially carrying excess inventory in slower-moving client segments or legacy products that are harder to sell. The result is an undesirable situation: too little of what the market wants and potentially too much of what it doesn&#8217;t.</p><p>The supply constraint should, in theory, be a small and solvable problem (management expects supply to improve beginning in Q2 2026); if this were truly an isolated inventory mismatch, it would resolve in a few quarters and the business would move on. However, coming after the recent run-up in the stock price, the sudden drop demonstrates investor concern that this execution mistake, combined with years of declining performance in market share and technological capability, may indicate deeper, unchanged organisational dysfunction and an inability to capitalise on market opportunities. The fear is not about this quarter&#8217;s numbers, but whether Intel&#8217;s pattern of declining performance will continue indefinitely as competitors further capture the AI and compute market.</p><p>Intel has laid out genuinely ambitious plans for its future and has begun investing heavily in regaining both market share and technological leadership. The company has undergone significant organisational restructuring, centralising its data centre and AI businesses, streamlining operations, and recruiting new leadership talent. 
Intel is now shipping products on its most advanced manufacturing process, Intel 18A, making it the only manufacturer producing such sophisticated transistor technology on U.S. soil. The future roadmap includes advancing to Intel 14A, with volume production targeted for 2028. On paper, Intel possesses significant advantages: extensive manufacturing capacity, deep customer relationships, formidable engineering talent, and strategic importance for Western semiconductor independence.</p><p>Yet earnings reports like this one raise the fundamental question: can Intel actually execute? The demand forecasting failure, combined with manufacturing yields that remain below industry-leading standards, falling market share in many bread-and-butter markets, and new products that are still dilutive to corporate gross margins, suggests that whilst progress is being made, execution has so far fallen short. A turnaround like Intel&#8217;s was never going to be quick or easy. The semiconductor industry is inherently capital intensive, with product cycles measured in years and enormous upfront investment required for new process nodes and fabs. As a result, the true quality of decision-making and execution can only be assessed over multiple product cycles and capital deployments, meaning investors will need patience to see whether strategic initiatives translate into sustainable market leadership and financial performance. In other words, despite the recent upturn and the latest bump in the road, the truth is that no one can yet say how Intel will turn out in the long run. 
The positive thesis will probably centre on Intel&#8217;s existing resources and its clear commitment to regaining technological leadership through disciplined but ambitious investment, while the negative thesis will probably centre on competitors already taking market share and holding momentum, combined with Intel&#8217;s deep-rooted cultural bureaucracy and mismanagement.</p><p>Intel is a case of substantial valuation uncertainty. The company&#8217;s unstable outlook makes any valuation method difficult to apply with confidence. Outcomes could range from a successful turnaround justifying much higher valuations to continued competitive decline making current prices look generous. For the diligent investor, Intel demands prudence and rigorous ongoing analysis. This is a bet on operational turnaround in one of the world&#8217;s most competitive industries. The resources, talent, and strategic positioning are present, but translating these advantages into consistent execution remains unproven. The coming quarters will be critical in determining whether Intel&#8217;s ambitious plans can move beyond quotes and words and into operational reality. Until the company demonstrates reliable delivery across multiple product cycles, Intel and its stakeholders will truly be in for quite the ride.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p>]]></content:encoded></item><item><title><![CDATA[The Great Memory Shortage]]></title><description><![CDATA[The Consequences of the AI Buildout Continue]]></description><link>https://www.bonnefinresearch.com/p/notes-002-the-great-memory-shortage</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/notes-002-the-great-memory-shortage</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Wed, 21 Jan 2026 23:00:37 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/17a634ac-3697-48b9-8ee8-f3192b81a7bd_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You know something unusual is happening when a company like SanDisk, traditionally viewed by the general public as a stable provider of mature consumer storage products, appreciates more than 1000% in well under a year, its stock price rising from $38.50 in June 2025 all the way to $453.12 as of the 20<sup>th</sup> of January 2026. This is not a business traditionally associated with explosive growth, yet its stock has done exactly that. Perhaps most importantly, much of this rise seems to have come not only from retail investor euphoria or speculation, though that is surely a factor, but from an institutional reassessment of the fundamentals of the business, driven by revenue, profit and margin expansion. Gross margins expanded from 22% to over 40% within twelve months, whilst revenue is forecast to surge 42% in fiscal 2026 to $10.45 billion and earnings to skyrocket 350% to $13.46 per share. What&#8217;s even more remarkable is that some analysts predict gross margins will hit around 65% in 2027 and hold that level for a while. While forward projections are always at risk of being incorrect, especially in dynamic and competitive fields such as AI-related hardware, what is undeniable is that it has already been a wild time for SanDisk and many other companies in the computer memory industry. 
What&#8217;s more, many think there is much more craziness to come, especially as the flow-on effects of this potential memory shortage hit the consumer markets and become more tangible to the masses.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aACc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aACc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 424w, https://substackcdn.com/image/fetch/$s_!aACc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 848w, https://substackcdn.com/image/fetch/$s_!aACc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 1272w, https://substackcdn.com/image/fetch/$s_!aACc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aACc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png" width="939" height="504" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:504,&quot;width&quot;:939,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:86217,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.bonnefinresearch.com/i/185300317?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aACc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 424w, https://substackcdn.com/image/fetch/$s_!aACc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 848w, https://substackcdn.com/image/fetch/$s_!aACc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 1272w, https://substackcdn.com/image/fetch/$s_!aACc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96f92f3f-37c6-4ff5-93b1-6e8bcb82f04e_939x504.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>So how did a mature and seemingly boring stock suddenly become a $66 billion company seemingly in the blink of an eye?</p><p>While there are many factors at play, including SanDisk moving up the stack and entering enterprise and high capacity/high endurance NAND, the most important factor is the overall market&#8217;s excessive demand for memory chips and the mismatch of supply leading to companies allocating larger proportions of existing capacity and resources towards the more profitable and higher paying AI customers. When concerns first emerged about the AI infrastructure buildout, much of the conversation centred on energy consumption. Data centres were projected to strain power grids, potentially driving electricity prices skyward. 
Yet as the AI industry develops, a different bottleneck is becoming painfully clear: memory chips.</p><p>The reason for this explosion in demand is that modern AI LLMs and computer systems require large amounts of data to be moved rapidly between processors and storage. Unlike traditional computing applications, where processing power often represents the primary constraint, many AI workloads today are fundamentally memory-bound. Training a frontier model involves moving terabytes of parameters across countless iterations, whilst inference demands rapid access to billions of weights simultaneously. Bandwidth is equally critical: a processor can only work as fast as it can access the data it needs, and as a result, High Bandwidth Memory (HBM) has emerged as the gold standard for AI applications because it can deliver vastly more data per second than conventional memory technologies. The market for advanced memory chips is remarkably concentrated amongst a handful of manufacturers, the main three being Samsung, SK Hynix and Micron. They have traditionally supplied multiple lines of memory chips from high-end to lower-end, albeit to varying degrees, but as demand for HBM and other high-end memory products from AI companies has brought more revenue and higher margins (profit margins are roughly 60&#8211;70%, compared to much thinner margins for consumer RAM), they are focusing their factories, engineers and resources on high-ticket products and neglecting lower-end and consumer lines. SK Hynix has reportedly shifted almost all of its advanced cleanroom space to HBM and server-grade DDR5, and while it hasn&#8217;t officially quit the consumer market, its supply of standard PC RAM has plummeted, contributing to the massive price spikes in the retail market. Micron has taken the most drastic step. 
In late 2025/early 2026, they made the high-profile decision to exit several consumer chip lines (including their famous Crucial brand for certain retail segments) to focus almost entirely on the enterprise market. Since Samsung makes final products (phones, tablets, PCs), they are the only player still trying to maintain multiple lines, producing both higher- and lower-end memory products, but even they are struggling to keep up. They have had to divert massive amounts of production capacity to keep pace in the HBM market, which has tightened the supply of their consumer SSDs, and even though they remain committed to producing these lower-end products, they recently rejected long-term contracts for standard DRAM, choosing instead to raise prices by up to 70% to match the market&#8217;s scarcity. As these manufacturers pivot aggressively towards high-margin AI memory products, a predictable consequence is emerging: shortages in the commodity memory that powers everyday devices.</p><p>Traditionally, RAM serves as a computer&#8217;s high-speed workspace while NAND flash provides permanent storage. However, AI&#8217;s massive data requirements have blurred this distinction: NAND is now being re-engineered to function more like active memory, since keeping everything in expensive RAM would be economically unfeasible. This has triggered a supply crisis, as manufacturers reallocate production capacity towards high-margin AI data centre memory products (enterprise DRAM, HBM, and performance NAND), leaving consumer markets with tighter supply. The result is rising prices across the board, because AI infrastructure is absorbing the manufacturing capacity that would otherwise serve everyday PC builders and upgraders. 
In other words, the memory makers are allocating more to AI data centre products, the storage makers are doing the same as memory and storage blur together, and what you are left with is vast demand that supply can barely meet for AI data centres, with less supply available for non-AI customers, who tend to pay less. Furthermore, unless inference efficiency improves many times over relative to what the models consume, a massive build-out of AI data centres is still projected for the coming years, and current projections have memory chips as one of the main bottlenecks in that expansion. For context, some research estimates that the industry is attempting to add nearly 100GW of new capacity between 2026 and 2030 to meet the explosion in generative AI demand, while a Macquarie research report has projected that the current supply capacity of the three major memory makers (Samsung, SK Hynix, and Micron) is only adequate to meet the build-out of about 15GW of AI data centres over the next two years. Thus, even with new capital investment in manufacturing capacity for storage and memory products, which will take years to come online, much of it may still be reserved for AI customers, leaving relatively limited supply for other customers unless they pay much higher prices. Industry analysts project DRAM prices could surge by 30&#8211;50% over the coming year, with NAND flash following a similar trajectory. 
On a side note, Nvidia, TPUs, and other chipmaking start-ups are already achieving multi-fold efficiency gains for inference in new generations, but those gains must outpace the growth in what the models consume before net energy requirements fall and the build-out demand for these chips eases.</p><p>So, is everyone going to feel the effects on their budgets when buying technology products over the next few years? The answer is probably yes, but not all buyers will be affected equally. Companies with sufficient scale and foresight are moving aggressively to lock in supply at fixed prices. Apple, with its massive purchasing power and sophisticated supply chain management, has reportedly secured long-term agreements for significant portions of its NAND requirements. Its ability to commit to enormous volumes years in advance gives it leverage that smaller manufacturers can&#8217;t replicate. Yet even Apple cannot insulate itself entirely. Whilst it may have hedged its NAND exposure to a degree, reports suggest portions of its DRAM requirements remain uncontracted and exposed to spot market pricing.</p><p>For consumer products, memory chips represent a substantial portion of the bill of materials for virtually every electronic device. A high-end smartphone can incorporate between US$100 and US$150 worth of memory and storage components. Laptops and professional workstations can exceed this, often incorporating US$200 to US$350 in memory-related costs alone. That $1000 smartphone may become a $1100 smartphone. The $1500 laptop may stretch towards $1600. Gaming consoles, tablets, smart televisions, automotive electronics: all of them will probably feel the pull of rising memory costs.
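The rough pass-through arithmetic above can be sketched in a few lines. This is purely illustrative: the figures are the article's own ballpark estimates, and the assumption that the full memory cost increase is passed to the buyer is a simplification.

```python
# Illustrative sketch of how a memory price surge feeds into a device's
# bill of materials (BOM). Figures are the article's rough estimates,
# not measured data, and full pass-through to the buyer is assumed.

def new_device_cost(retail_price, memory_bom, memory_price_increase):
    """Estimate a device's new price if the memory portion of its BOM
    rises by `memory_price_increase` and is passed through in full."""
    return retail_price + memory_bom * memory_price_increase

# A $1000 smartphone carrying ~$150 of memory/storage, with a 40% rise
# (midpoint of the projected 30-50% DRAM surge):
phone = new_device_cost(1000, 150, 0.40)   # 1000 + 150*0.40 = 1060.0

# A $1500 laptop carrying ~$300 of memory-related components:
laptop = new_device_cost(1500, 300, 0.40)  # 1500 + 300*0.40 = 1620.0

print(phone, laptop)
```

Under these assumptions the increases land close to the "$1000 phone becomes a $1100 phone" range cited above; the exact figure depends on how much of the cost manufacturers absorb.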
The cumulative impact across a household&#8217;s technology purchases could easily reach hundreds to thousands of dollars annually.</p><p>Standard economics suggests that such large profit margins should attract new supply, and memory manufacturing is indeed cyclical, with periods of shortage driving investment that eventually creates gluts, which then lead to underinvestment and subsequent shortages. The current dynamics appear to fit this pattern, as manufacturers are responding with capacity expansion plans. SK Hynix has announced multibillion-dollar investments in HBM production facilities. Micron is breaking ground on new fabrication plants in the United States, subsidised partially by government semiconductor initiatives. Samsung continues to expand its memory capacity. And other players are entering the market or expanding existing operations. These investments should, in theory, eventually rebalance supply and demand. However, it is not as simple as it seems, and the risk involved in expanding capacity may produce a more cautious and slower approach from these companies. Building a modern memory fabrication facility takes three to four years from ground-breaking to volume production. The capital requirements are staggering, with a single advanced megafab now costing between $15 billion and $20 billion, and some next-generation facilities projected to exceed $30 billion, or even $100 billion for multi-fab complexes. These extreme costs create understandable hesitation and a fear of overbuilding. A manufacturer committing to such investments today must predict the market landscape of 2028 or 2029, a timeframe that is nearly impossible to forecast confidently in the rapidly evolving world of AI technology. Memory chipmakers are wary of repeating previous cycles in which aggressive overbuilding led to global oversupply and a subsequent collapse in prices. This discipline may slow the rebalancing of supply and demand.
Furthermore, legacy or lower-margin products, such as the standard memory found in consumer electronics and home appliances, may fail to receive adequate investment or manufacturing attention for the reasons above, leaving those products vulnerable to sustained shortages and price instability. However, the current shortage will likely not prove catastrophic or permanent. Market mechanisms do work, albeit with lags. The extraordinary margins available in memory production will attract capital, encourage capacity expansion, and eventually rebalance supply. New entrants may emerge, though the technical and capital barriers to memory manufacturing are formidable. Existing manufacturers will, despite their caution, ultimately expand capacity, because the profit opportunity is too substantial to ignore entirely. But the time lags may prove painful and provoke uproar.</p><p>Ultimately, what we&#8217;re seeing is the price of the AI build-out in a world of limited resources. Silicon fabrication capacity, cleanroom facilities, advanced lithography equipment, skilled engineers: all exist in finite quantities. When markets decide to pursue a particular technological direction with intensity, resources are allocated in that direction. Markets, both free and interventionist, have rendered their verdict for now: AI development is sufficiently important to warrant enormous resource allocation. Through a combination of venture capital enthusiasm, corporate strategic investment, and government industrial policy, hundreds of billions of dollars are being directed towards AI infrastructure. This capital represents a claim on the real resources that make cutting-edge semiconductors possible.</p><p>When AI infrastructure makes a larger claim on these resources, other claimants necessarily receive less of the overall pie.
The consumer buying a laptop, the small manufacturer producing IoT devices, the automotive company building electric vehicles: all find themselves competing for resources against deep-pocketed hyperscalers and firms building AI data centres. We can, and in all likelihood will, increase overall capacity and expand the pie, allowing for increased production and supply across the board, but it will take time, and those with less purchasing power will still receive relatively less than those with more.</p><p>Predicting the future economic landscape of the memory market is difficult, given the volatile nature of technological progress and shifting market requirements. While current trends suggest sustained demand for HBM and expanded data centre infrastructure will keep supply conditions tight for memory and storage chips, these trajectories are subject to sudden shifts in end-user demand or a potential cooling of AI investment. Technological shifts also present significant variables. The further development and adoption of new types of specialised inference chips or more efficient hardware architectures could fundamentally reduce the physical footprint and memory capacity required to sustain complex computations. Furthermore, advances in software efficiency and algorithmic optimisation may allow for sophisticated processing with significantly lower hardware overhead than is currently anticipated. Consequently, while current trends point to the great memory shortage, the inherent unpredictability of innovation means that existing paradigms should be viewed as subject to change at any moment.</p><p>Whether you agree with this build-out or not, and whether these investments will bring the benefits and value to society envisioned, are important and genuine questions that need to be answered.
However, for now, regardless of your position, it is clear that there is already a great memory shortage on our hands, and one that may only worsen.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/notes-002-the-great-memory-shortage?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/notes-002-the-great-memory-shortage?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/notes-002-the-great-memory-shortage/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/notes-002-the-great-memory-shortage/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[DeepSeek 
Engram]]></title><description><![CDATA[Brace Yourself for V4]]></description><link>https://www.bonnefinresearch.com/p/notes-001-deepseek-engram</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/notes-001-deepseek-engram</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Mon, 19 Jan 2026 00:00:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5ec6a028-7e4e-4476-8d51-356699047550_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>DeepSeek shook the AI world and financial markets at the start of 2025, and it now seems to be dropping hints that it is looking to do the same again in 2026. The Chinese company has already released two papers this year, one of which describes their new Engram architecture. We should see the effectiveness and impact of this new technique soon, as they are slated to launch their new DeepSeek-V4 model in mid-February, and if last year has shown us anything, it is that DeepSeek is perhaps the most influential AI lab outside of the US at the moment.</p><p>Let us first establish some context for this new model and why it could represent something important, or perhaps more of the same. In December 2024 and January 2025, DeepSeek, which was relatively unknown at the time, released its DeepSeek-V3 and DeepSeek-R1 models. While the consensus is that they did not surpass any frontier models from the Western AI labs, they came remarkably close, and it was a real moment of realisation that the Chinese AI labs were catching up and chasing hard. What was more impressive, though, was DeepSeek&#8217;s claim that its models had been trained on a fraction of the computing resources and chips used by the Western AI models to that point.
This had massive implications for the market. First, it caused many to question whether the AI infrastructure build-out and the massive capex being spent on data centres were truly necessary; it was a large factor, combined with others, in NVIDIA suffering a temporary $600 billion market cap decline. Furthermore, this DeepSeek moment was especially significant because it challenged one of the West&#8217;s primary competitive advantages: access to superior chips like NVIDIA&#8217;s and a chipmaking ecosystem anchored by companies like TSMC and ASML, hardware now restricted from export to China.</p><p>It provided the basis for hope, or fear, depending on your perspective: that architectural gains could be enough to create the best models and AI products, now and in the future, even under computing limitations. DeepSeek claimed it was able to create such a competitive model with many times less compute thanks to architectural innovation in its algorithms, such as its use of a mixture-of-experts design. While people argue about the precise amount of compute DeepSeek used in training V3, no one disputes that the training process was significantly more efficient than that of other models at the time, and that it represented a genuine contribution to model architecture in the AI field.</p><p>However, the initial panic subsided relatively quickly after the release of V3. Within weeks, the industry had processed the implications and reached a more nuanced conclusion: DeepSeek&#8217;s new models hadn&#8217;t invalidated the importance of compute; rather, they had demonstrated that compute could be used more efficiently.
The innovations in DeepSeek&#8217;s architecture, particularly its sophisticated mixture-of-experts design, didn&#8217;t eliminate the need for scaling; they made scaling more productive, meaning that applying these novel architectural insights to increasingly large computational capabilities would yield even better models. In other words, companies realised that you still wanted to build out as much compute as you could, because combined with superior architecture it would yield even greater results than before. So while DeepSeek announced itself as an important player in the AI ecosystem, it ended up being yet another bookmark in the story of AI, as most of the Western and Chinese AI labs went about business as usual for the rest of the year, expanding compute power and innovating on architectural design and data to keep iterating their models.</p><p>The allure of DeepSeek is accentuated by its founding story. Rather than emerging from Silicon Valley&#8217;s venture capital machine or a tech giant like Google, it comes from a quantitative hedge fund in China, with a reportedly brilliant founder essentially going out on his own, tinkering, creating, and inventing. The narrative goes that Liang Wenfeng couldn&#8217;t effectively communicate his AI ideas or vision to the investors in his hedge fund; they didn&#8217;t understand, or weren&#8217;t interested in, this tangent from the core quantitative trading business. But because he had control over High-Flyer and its resources, he didn&#8217;t need their permission or buy-in; he simply went ahead anyway, funding the research himself through the profits of the hedge fund.
There&#8217;s a romantic quality to this narrative that stands in stark contrast to the typical AI company today, which raises large amounts of venture capital, has a wealth of resources at its disposal, and is entrenched in the mainstream technology circles of Silicon Valley. Instead, we get the image of Liang Wenfeng pursuing what feels like an intellectually driven project: an underdog armed with only his wits, competing against the Goliaths of his day. Although this kind of romantic story can be overstated, as the project does stem from a multi-billion-dollar quant fund, the increased efficiency and natural limitations faced by DeepSeek certainly paint this image of scrappy innovation versus industrial machinery.</p><p>Now back to DeepSeek Engram. As we have mentioned, a year or so has passed and the markets and the AI world have more or less returned to business as before, with a heavy focus on architecture and increasing compute power to improve the quality of models. This is especially beneficial for the US, whose main advantage has been the superiority of its chips and hardware ecosystem, and it is projected to remain this way for the foreseeable future, with Chinese companies struggling to catch up in chipmaking. But DeepSeek has released new papers in 2026, promising new architectures that will again bring a leap in efficiency by fundamentally rethinking how AI models handle different types of knowledge. It seems clear that DeepSeek expects its new model to make quite an impact, based on the release of these papers and the signs of confidence throughout, and some online reports even suggest its coding ability may surpass ChatGPT&#8217;s. In fact, in the Engram paper, they have placed a telling reference explaining its architecture.
It is the phrase: &#8220;Only Alexander the Great could tame the horse Bucephalus.&#8221; The tale goes that King Philip II of Macedon was presented with a magnificent but wild horse named Bucephalus, whose name meant &#8220;ox-headed&#8221; due to the distinctive shape of his brow. The animal was extraordinarily powerful but completely unmanageable, and none of the experienced horsemen could ride him, so he was deemed worthless. Philip ordered the horse taken away, but his young son Alexander begged for one chance to tame the beast. What happened next became legend. While the grown men had attempted to dominate Bucephalus through strength and traditional horsemanship, the young Alexander had been observing. He noticed something everyone else had missed: Bucephalus was frightened by his own shadow moving on the ground. Every time the horse saw this dark shape shifting beneath him, he would rear and bolt. Alexander&#8217;s solution was elegant in its simplicity. He turned Bucephalus towards the sun, eliminating the shadow from the horse&#8217;s field of vision, then ran alongside the animal until it was calm, before finally mounting and riding him successfully. Bucephalus became Alexander&#8217;s legendary warhorse, carrying him across thousands of miles of conquest from Greece to India, until the horse&#8217;s death in battle around 326 BCE. Alexander founded a city, Bucephala, in his honour. The lesson wasn&#8217;t about raw power but about ingenuity: understanding, and acting on, what others are unable to see.
DeepSeek seems to be positioning itself as the Alexander the Great of AI: the lab that will tame the AI beast not through overwhelming resources but by observing what others have missed.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TyOX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TyOX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 424w, https://substackcdn.com/image/fetch/$s_!TyOX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 848w, https://substackcdn.com/image/fetch/$s_!TyOX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 1272w, https://substackcdn.com/image/fetch/$s_!TyOX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TyOX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png" width="607" height="368" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:368,&quot;width&quot;:607,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:56491,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.bonnefinresearch.com/i/184946533?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TyOX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 424w, https://substackcdn.com/image/fetch/$s_!TyOX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 848w, https://substackcdn.com/image/fetch/$s_!TyOX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 1272w, https://substackcdn.com/image/fetch/$s_!TyOX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff159bd76-4d91-461b-8d54-4e1fd7af400e_607x368.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>So what actually is Engram? Traditional large language models treat every token generation as equally expensive, running complex calculations through billions of parameters whether they&#8217;re deriving a novel mathematical proof or reciting &#8220;To be or not to be.&#8221; This is computationally wasteful and is the equivalent of using the same brute force approach regardless of whether you&#8217;re facing a genuinely difficult challenge or a routine task. To put it as simply as possible, Engram proposes a solution: maintain a massive external memory structure to handle the routine stuff, freeing the core neural network to focus on actual reasoning tasks. When an AI model recalls that Paris is the capital of France, it doesn&#8217;t need to activate billions of parameters and perform complex matrix multiplications. 
It just requires factual retrieval, not reasoning. Engram offloads these memorisation tasks to a separate lookup table, preserving the neural network&#8217;s capacity for tasks that genuinely require &#8220;thinking.&#8221; This isn&#8217;t entirely unprecedented. The concept echoes earlier work in neural-symbolic AI, retrieval-augmented generation (RAG), and even classic computer science data structures. What makes Engram potentially significant is the scale and integration: turning what has traditionally been a small component of a larger system into a fundamental architectural principle. Other labs have explored similar territory, but DeepSeek appears to be making the most aggressive bet on this split-architecture approach. This structure allows the model to decouple memory from reasoning, theoretically increasing efficiency by reserving the heavy computational &#8220;thinking&#8221; power for novel or complex problems.</p><p>However, there is still a fundamental tension in DeepSeek&#8217;s story. Even if Engram works brilliantly, even if V4 demonstrates remarkable capabilities at a fraction of the computational cost, the likely outcome isn&#8217;t a sustained competitive advantage. It&#8217;s that Western labs will study the architecture, implement their own versions, and combine those efficiency gains with their substantially larger compute budgets to regain the lead or pull further ahead. Until continued compute scaling stops improving AI models, for whatever reason, many of these architectural innovations will not translate into sustained competitive advantage.
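To make the retrieval-versus-reasoning split described above concrete, here is a deliberately simplified sketch. This is not DeepSeek's actual implementation (Engram operates on learned representations inside the network, not Python dictionaries); it only illustrates the routing idea: answer memorised facts with a cheap lookup and pay the full computational cost only for novel queries.

```python
# Conceptual sketch (NOT DeepSeek's implementation) of routing between
# a cheap memory lookup and an expensive "reasoning" path.

EXPENSIVE_CALLS = 0  # counts how often the heavy path runs

def memory_lookup(query, table):
    """O(1) retrieval for memorised facts; returns None on a miss."""
    return table.get(query)

def reasoning_model(query):
    """Stand-in for a full forward pass through a large neural network."""
    global EXPENSIVE_CALLS
    EXPENSIVE_CALLS += 1
    return f"<reasoned answer to: {query}>"

def answer(query, table):
    # Try cheap retrieval first; fall back to heavy computation.
    cached = memory_lookup(query, table)
    return cached if cached is not None else reasoning_model(query)

facts = {"capital of France": "Paris"}
print(answer("capital of France", facts))          # served from memory
print(answer("prove x^2 >= 0 for real x", facts))  # triggers the model
print(EXPENSIVE_CALLS)  # only the novel query paid the full cost
```

The payoff in this toy version is the counter: repeated factual queries never touch the expensive path, which is the efficiency argument Engram makes at architectural scale.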
If DeepSeek V4 is indeed another substantial moment in the efficiency and quality of AI models, the question will be the same yet again: is this merely another bookmark in the road of AI, with other firms adopting the useful architectural techniques and perpetuating the rapid expansion and improvement of compute power, thus continuing the status quo, or could there be a true and lasting paradigm shift in how companies view the best way to advance AI research and progress? Only time will tell, but I believe this will be an interesting moment in the AI world this year, particularly if the increased efficiency in architectural design leads not only to more catch-up but to the genuine superiority of a Chinese AI model for the first time. It would serve us well to remember that one of the most important and credible AI researchers in the world right now, Ilya Sutskever, is on record saying that, under the current paradigm, he believes scaling compute will bring incremental improvements, but that an age of research is needed to bring about the next wave of substantial improvements that can take AI to the technological state so many of its proponents believe possible: some form of supernatural entity or technology.</p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. 
Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/notes-001-deepseek-engram?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/notes-001-deepseek-engram?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.bonnefinresearch.com/p/notes-001-deepseek-engram/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.bonnefinresearch.com/p/notes-001-deepseek-engram/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[A Primer on Value Investing]]></title><description><![CDATA[All Sound Investing is Value Investing]]></description><link>https://www.bonnefinresearch.com/p/a-primer-on-value-investing</link><guid isPermaLink="false">https://www.bonnefinresearch.com/p/a-primer-on-value-investing</guid><dc:creator><![CDATA[Bonnefin Group]]></dc:creator><pubDate>Tue, 23 Dec 2025 03:39:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/39a39592-b5fc-406a-871b-c9e81184a4ab_1200x630.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p><em>Everyone will have different thoughts on investing; the approach shared here reflects some ideas that have been rational and useful to us.</em></p><h3><strong>An Introduction to Value Investing</strong></h3><p>While some people are drawn to investing for the intellectual process and the journey it offers, for most investors and everyday individuals, it is the financial gains&#8212;and the personal benefits they bring&#8212;that provide the strongest reason to care about investing. The power of compounded returns is startling, and many do not realise what a drastic difference it can make to their lives if taken seriously. Putting aside the investment method of choice, if you could grow your wealth, compounded and uninterrupted, at a rate of 10% annually, and you began at the age of 20 with $1,000, your investment would be worth $117,390.85 by the time you&#8217;re 70. It certainly requires a lot of patience, but if you allow even a small part of your wealth to compound and grow, it can bring about vast personal gains in your life. That is why investing is something we should all care about, for those who invest well will certainly have a long-term advantage in financial wealth and comfort over those who don&#8217;t. Regardless of your age, it is never too late to begin investing. While the young have a time advantage, there is a Chinese proverb that we would do well to remember:</p><blockquote><p>&#8220;The best time to plant a tree was 20 years ago. The second-best time is now.&#8221;</p></blockquote><p>To begin with, let us be clear about what we mean by investing. Investing is committing resources in the present with the expectation of receiving a return on that commitment in the future.
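The compounding figure quoted above can be checked in a couple of lines; the inputs ($1,000, 10% annually, ages 20 to 70) are the paragraph's own.

```python
# Verify the compounding example: $1,000 growing at 10% a year,
# compounded annually, from age 20 to age 70 (50 years).

def compound(principal, rate, years):
    """Future value of a lump sum compounded once per year."""
    return principal * (1 + rate) ** years

value = compound(1_000, 0.10, 50)
print(round(value, 2))  # 117390.85
```

The same formula makes it easy to see why starting early matters: at 40 years instead of 50, the ending value falls to roughly $45,259.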
Furthermore, we view investing as a long-term commitment in which the performance of the underlying asset drives the growth of your investment; short-term speculating and trading on price factors alone is a different beast entirely. While there are certainly firms and individuals who have profited consistently and greatly from trading and speculating, short-term price movements are not an area we are particularly interested or proficient in, nor a path we would recommend to the vast majority of the population. Data advantages, trading speed, access to markets, financial resources, mathematical genius, infrastructure, and discipline are just a few of the reasons why only a rare few are able to generate strong positive returns from trading over the long term. Value investing is a far more accessible endeavour, with a track record of successful returns when done correctly. Many of the greatest investors and financial minds have been firm proponents of the underlying theory of value investing for good reason. The returns, the intellectual journey, and the societal role of value investing are just a few of the reasons why it is a compelling path to go down.</p><p>The financial world imposes a range of investing categories upon us. One example is the positioning of growth and value investing as alternative strategies. Other categorisations revolve around the class of assets being invested in, with private equity, venture capital, private credit, distressed debt, commodities, and fixed income being some of the common ones. However, at a fundamental theoretical level, all good investing is value investing. The simple reason is this: if you are buying something without an idea of its value, of what the asset is or could be worth, you are merely a victim of what the next person is willing to pay for it.
This may occasionally yield good results, but to predict and be at the mercy of the whims of man tends not to be an enjoyable or fruitful endeavour over the long run. Your returns are rooted in greater fool theory, the hope that a greater fool will come along willing to buy from you, rather than in more rational sources of return. A profit earned by analysing and understanding an underlying asset&#8217;s ability to produce cash from operations is usually far more logical, predictable, and profitable than simply looking at market prices and hoping they will move in the direction you want.</p><p>Value can be a subjective term, as different people may see different things in the same matter, but in the context of investing, the consensus is that the intrinsic value of an investment lies in the cash an underlying asset can generate for its owner over its lifetime, discounted to the present. In theory, it all sounds very simple: price is what you pay, value is what you get, and value investing is trying to pay less than what you are getting. The objective is to identify assets that are trading for less than their intrinsic value and invest in them with the expectation that either the underlying cash produced will provide you with worthwhile positive returns, or the markets will eventually recognise their true worth and adjust the price accordingly. It may sound simple, but in reality it clearly requires a degree of skill, or else everyone would be profiting from investing.
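</p><p>As a minimal sketch of this consensus definition, intrinsic value can be modelled as the sum of an asset&#8217;s future cash flows discounted to the present. The cash flows and the 8% discount rate below are purely illustrative assumptions, and the <code>intrinsic_value</code> helper is our own naming:</p>

```python
# Discounted cash flow (DCF) in its simplest form: the present value of
# the cash an asset hands its owner each year over its remaining life.
def intrinsic_value(cash_flows, discount_rate):
    """Sum of annual owner cash flows, each discounted back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A hypothetical asset paying $100 a year for 10 years, discounted at 8%
print(round(intrinsic_value([100] * 10, 0.08), 2))  # 671.01
```

<p>Note that the value (about $671) is well below the $1,000 of undiscounted cash: cash received later is worth less today, which is exactly what the discounting captures.</p><p>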
The difficulty lies in finding and valuing assets better than the market does, and then having the disposition to follow through on your conclusions.</p><p>Some general points about value investing should be made quickly:</p><ul><li><p>Macroeconomic factors certainly affect the value of assets, and thus a general understanding of how the overarching economy works is essential. However, value investing tends to be a bottom-up approach: it is the core asset in question that is the primary focus, not the overall economy.</p></li><li><p>Investing in a company means you own a percentage of its shares and, therefore, a stake in its rights and business. Adopt this ownership mindset, combined with a genuine desire to understand everything about the business as if you were a primary owner of the company, and you will naturally reach a deeper understanding of, and a healthier mentality towards, your investment.</p></li><li><p>Growth is part of value.</p></li><li><p>Value investing is a process that requires diligent learning and thinking. For those who are not prepared to research and analyse, alternative paths like passive investing are perhaps a better field of exploration, though that field brings with it its own risks and problems.</p></li></ul><p>While the focus of this primer is on value investing in the context of companies and active stock picking, it is a framework of investing that can be used for anything that requires investment for future return, whether it be other forms of financial assets or personal commitments. Investing strategies and styles will always be in flux; as the world changes, everyone needs to adapt at times. But the fundamental theory of value investing is evergreen, for it is rooted in first principles that seldom change. The means change but the ends stay the same.</p><h3><strong>The Art of Valuation</strong></h3><p>Investing is more art than science. This is a core idea that you must become comfortable with.
While it feels comforting to rely on precise numbers and calculations&#8212;especially concerning money&#8212;chasing perfect predictions and valuations is a fool&#8217;s errand. Despite our advanced tools, there are far too many macro and micro-level variables for humans to comprehend and analyse to arrive at accurate predictions. In the world of investing, it is better to aim for general correctness than to strive for absolute precision; chasing precision often leads investors to be precisely wrong. Investing is an imperfect science requiring adaptive thinking and constant questioning, focusing on open-ended answers rather than repeatable processes yielding fixed results.</p><p>In finance, quantitative valuation methods&#8212;such as Discounted Cash Flow (DCF) analysis and Price-to-Earnings (P/E) ratios&#8212;serve as essential tools for estimating the value of a company or asset. These models provide insights into a company&#8217;s intrinsic value by relying on numerical data, including projected cash flows, earnings, and asset valuations. However, while these methods offer useful guidelines, they are not precise sciences. Their effectiveness hinges on assumptions about future performance, which can vary significantly based on countless factors. Moreover, they often overlook qualitative aspects&#8212;such as management quality, competitive advantages, and market dynamics&#8212;that are critical for understanding a company&#8217;s long-term potential. Given these limitations, it is essential to complement quantitative analyses with qualitative evaluations to achieve a more comprehensive view of a company&#8217;s value. This approach involves assessing factors like brand reputation, industry trends, and the overall business environment to capture the complexities that numbers alone cannot convey.
It is crucial not to take quantitative figures at face value; instead, question the underlying assumptions driving these numbers and seek to understand the narrative they convey about the business. By delving into the story behind the numbers&#8212;examining the context, challenges, and opportunities that may not be immediately evident&#8212;investors can gain deeper insights into a company&#8217;s true potential. Integrating both quantitative and qualitative perspectives allows investors to make informed decisions, navigate market uncertainties, and enhance their chances of identifying valuable investment opportunities that might otherwise be obscured by mere numerical analysis.</p><p>As a result, several essential concepts should guide investors in their thinking and decision-making. The first, and perhaps the most important, is margin of safety. A margin of safety is the buffer or extra capacity included in a system, design, or investment to account for uncertainties, errors, or unexpected conditions. To protect against the limits of precision and the inevitable mistakes in our thinking and decision-making, it is wise to maintain a significant margin of safety when making investments. For instance, if you believe a company is worth X, you should not invest if the price is merely slightly under X, but rather only if there is a substantial discrepancy between X and the current price, leading to a natural margin of safety. This principle applies not only in quantitative valuation but also serves as a general framework throughout the investment process. When analysing qualitative factors, it is beneficial to ensure that the competitive advantage of a potential investment is genuinely stronger than that of its competitors. If it is only marginally better, the qualitative strength of the business may quickly deteriorate, or you might be mistaken in assessing its advantage.
The idea is this: you should only invest when there is such a large margin of safety that you can afford to be wrong about various factors while still having the potential for some gain&#8212;or at least capital protection. If you are right, the potential for large returns remains intact. While adhering to a margin of safety may eliminate many seemingly decent investments, it ultimately provides a higher floor and ensures a high ceiling for potential returns.</p><p>Market price represents the consensus valuation of an asset. In today&#8217;s environment, where most investors are institutional, do not be fooled into thinking that markets are always irrational. Intelligent and competent individuals and organisations make their living in the markets, so it&#8217;s essential to question why your valuation differs from theirs if a discrepancy arises. They may see something you have yet to consider. You do not need to be smarter or more accurate than the market for every single asset and investment; instead, you should look for occasional mispricings where you understand factors and reasoning that others have overlooked. Your goal is to identify situations where your individual valuation is more accurate than the market&#8217;s to profit from the difference.</p><p>Ultimately, quantitative earnings tend to determine the long-term price of an asset. However, in determining quantitative valuations, it is essential to remember that qualitative factors underpin the earnings and quantitative performance of a company. The revenue, expenses, and earnings of a business stem from the products they produce and sell. Factors such as competitive advantage, market positioning, value proposition, management, and numerous other qualitative elements are vital for accurately valuing a company. 
In fact, many great investments can be uncovered through qualitative analysis, as it is far easier for humans and algorithms to crunch numbers than to grapple with the open-ended complexities of qualitative evaluation.</p><p>The main takeaway is this: investing is not a precise science with fixed numbers and results. It is an open-ended endeavour that requires a multifaceted approach across various factors, including those without definitive answers. At times, it resembles an art form, demanding creative and open thinking with no immediate validation of your conclusions.</p><h3><strong>Pricing</strong></h3><p>In value investing, one fundamental principle is deceptively simple: price matters. While the quality of a business, its competitive position, and its growth prospects are critical, the price you pay for an asset ultimately dictates your potential return, your exposure to risk, and the margin of safety you possess. Understanding the multifaceted role of price is essential for any investor who seeks to generate superior long-term returns while minimising the risk of permanent capital loss.</p><p>The most straightforward way price influences investing outcomes is through the return you can achieve, assuming you subscribe to the ideas of value investing and the belief that the rational return of an asset relies on the cash flows it can generate for its owners over its lifetime. If two assets have identical intrinsic values but are purchased at different prices, the lower-priced asset will deliver a higher return once its value is realised. This is because return, in its simplest form, is a function of the difference between what you pay and the value you ultimately receive. For example, if a stock is objectively worth $100 per share and you acquire it at $80, your potential return is 25%. Conversely, if you pay $95, your potential return falls to just over 5%.
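</p><p>That arithmetic generalises: if price converges to intrinsic value, the potential return is simply value divided by price, minus one. A small sketch (the <code>potential_return</code> helper is our own illustrative naming):</p>

```python
# Potential return when an asset's price converges to its intrinsic value.
def potential_return(intrinsic_value, price):
    return intrinsic_value / price - 1

print(f"{potential_return(100, 80):.1%}")  # 25.0%
print(f"{potential_return(100, 95):.2%}")  # 5.26%
```

<p>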
This illustrates a core tenet of value investing: the price you pay relative to intrinsic value is the most direct determinant of your eventual return. Ignoring this principle can turn a fundamentally sound investment into a mediocre or even losing proposition.</p><p>Even when intrinsic value can be reasonably estimated, the margin between price and value remains a critical factor. Paying full or near-full intrinsic value typically limits potential upside, while overpaying erodes returns and increases the likelihood of loss. Conversely, acquiring an asset at a meaningful discount to its intrinsic value creates both higher potential returns and a buffer against adverse outcomes. For instance, purchasing an asset at a 20&#8211;30% discount to intrinsic value provides not only the potential for capital appreciation but also a cushion against errors in your valuation or unforeseen market shocks. This margin of safety is central to the philosophy of legendary investors like Benjamin Graham and Warren Buffett, who emphasised that the key to successful investing is not simply identifying great businesses, but acquiring them at prices that offer compelling value.</p><p>Price is a direct determinant of investment risk. Paying less for an asset reduces the capital at stake, inherently limiting downside exposure. When an asset&#8217;s intrinsic value remains high relative to its market price, a lower purchase price creates asymmetric opportunity: upside potential is substantial while downside is constrained. The opposite is true when an asset is purchased at overvalued prices, which increases risk by placing more capital at stake relative to potential returns. However, not all low-priced assets are safe; some reflect permanent deterioration in business quality, creating value traps. 
True risk reduction comes from combining a favourable purchase price with rigorous assessment of fundamentals, ensuring that the price paid provides both a margin of safety and meaningful upside potential.</p><p>While intrinsic value provides a rational benchmark, market prices often diverge from fundamentals. What you pay may be driven not by the underlying business, but by what others are willing to pay. This introduces the risk of the so-called &#8220;greater fool&#8221; phenomenon, where investors buy overpriced assets hoping to sell them to someone else at an even higher price. Such speculation can generate short-term gains but exposes investors to significant downside if market sentiment reverses. True value investing resists this temptation. Instead of chasing market prices, disciplined investors focus on the relationship between price and intrinsic value. If your thesis relies solely on someone else paying more tomorrow, you are speculating rather than investing. Recognising this distinction is crucial for protecting capital and achieving consistent long-term returns.</p><p>Price is far more than a number on a trading screen&#8212;it is the fulcrum on which returns, risk, and opportunity balance. A disciplined focus on price relative to intrinsic value allows investors to capture attractive returns, reduce downside risk, and maintain a margin of safety even in uncertain markets. While market sentiment and speculation may tempt investors to chase prices disconnected from value, the enduring principle of value investing remains: paying the right price for a high-quality asset is one of the surest paths to long-term success.</p><h3><strong>Value Proposition &amp; Competitive Advantage</strong></h3><p>A company&#8217;s value proposition is the foundation of its success. It represents the unique combination of products, services, or experiences that a business offers to its customers. 
This is not merely a marketing slogan; it is a tangible, economic reality that ultimately determines whether a business thrives or falters. At its core, a company creates value when the benefits it delivers to customers exceed the costs required to provide them. In other words, the economic value created&#8212;sales minus costs&#8212;must be positive and sustainable over time. If a business consistently consumes more resources than it generates in returns, no amount of market hype or investor optimism can compensate for this structural weakness.</p><p>However, value is relative, not absolute. Even a product that objectively delivers strong utility may be of limited value if competitors provide superior alternatives. Customers continuously evaluate choices, weighing the benefits they receive against the options available in the market. Thus, a company&#8217;s value proposition must be understood not in isolation, but in comparison to what else consumers can access. A smartphone with excellent technical specifications may still struggle if rival devices offer a better user experience, brand prestige, or ecosystem benefits.</p><p>Value is also inherently subjective. Different consumers prioritise different attributes&#8212;some may care most about price, others about convenience, quality, design, or social status. Understanding the range of what people value, and how the company addresses these preferences better than competitors, is central to evaluating its real economic contribution. A product or service may be objectively good, but its perceived value in the eyes of target customers ultimately governs demand, pricing power, and long-term success.</p><p>Ultimately, a company&#8217;s value proposition is defined by what it offers to customers, and part of a sustainable proposition is ensuring that what it creates consistently exceeds what it consumes. 
This principle connects directly to the concept of economic efficiency: a business that generates more value than it consumes in delivering its offerings can reinvest the surplus into growth, innovation, or margin improvement. Without this balance, even a well-regarded product will fail to sustain profitability over time. In essence, the durability and attractiveness of a business&#8217;s value proposition&#8212;relative, subjective, and objectively productive&#8212;form the foundation for long-term economic success.</p><p>The creation and durability of a company&#8217;s value proposition is intimately linked to competitive advantage. Competitive advantage is what enables a business to create and maintain its superior value proposition relative to rivals over time. Without it, the forces of competition erode profits, and value creation diminishes. Competitive advantage is not static; it is the structural or operational edge that allows a firm to protect pricing power, reduce costs, or create differentiation that competitors cannot easily replicate.</p><p>Sustained competitive advantage, or an economic moat, is the ultimate driver of enduring business success. Hamilton Helmer, in his seminal work <em>7 Powers</em>, identifies seven sources of such moats: scale economies, network effects, counter-positioning, switching costs, branding, cornered resources, and process power. Each of these powers enables a business to deliver superior value over time by creating structural barriers that competitors find difficult to overcome. For example, network effects allow a platform to become more valuable as more users join, while strong branding enables premium pricing without sacrificing demand. These moats translate directly into sustained profitability, consistent cash flows, and the ability to reinvest in the business for further growth.</p><p>Recognising the sources of competitive advantage is essential for value investors.
It is not enough to identify a profitable company today; investors must assess whether the business can maintain its advantage into the future. This involves analysing industry dynamics, barriers to entry, customer loyalty, supplier relationships, and the replicability of key processes. Businesses that possess multiple powers&#8212;say, strong branding combined with network effects&#8212;are more likely to sustain value creation and deliver superior returns over decades.</p><p>Ultimately, the interplay between a company&#8217;s value proposition, its ability to generate net value, and its sustained competitive advantage determines its long-term attractiveness as an investment. Investors who focus on these principles can identify businesses that not only perform well today but are structurally positioned to thrive for years to come. By understanding what makes a company valuable, why it maintains its edge, and how its competitive advantage is structured, value investors gain a lens to separate temporary trends from durable opportunities.</p><p>In conclusion, the pursuit of value investing is not merely about numbers; it is about understanding the architecture of business success. A superior value proposition that creates more than it consumes, coupled with sustained competitive advantage, forms the bedrock of enduring economic returns. By combining rigorous analysis with an appreciation of both objective and subjective sources of value, investors can align themselves with companies that are structurally capable of delivering lasting wealth creation.</p><h3><strong>Probability, Risk &amp; Asymmetry</strong></h3><p>Investing, like most things in this world, is a probability game. The reason is simple: it involves making decisions in an environment of uncertainty, where outcomes cannot be predicted with absolute certainty.
A wide range of factors, such as economic conditions, political events, market sentiment, and company-specific developments, all contribute to this uncertainty. Since, to the best of our knowledge, no one can consistently predict these variables with absolute accuracy, investment decisions should be based on the likelihood of different outcomes and their payoffs. We must have the intellectual humility to admit that we don&#8217;t have the ability to predict and understand specific outcomes precisely and definitively; instead, the best we can do is work with the probabilities. When considering probabilities in investing, it&#8217;s not just about the simple likelihood of different outcomes but also the expected payout if those outcomes occur. Ideally, you would look for high-probability events with high payouts, but they can be hard to find. At times, lower-probability events with higher payouts may be a more worthwhile investment than higher-probability events with lower payouts. This is captured by the concept of expected value, which combines the probability of each outcome with the potential return or loss associated with it. Essentially, expected value helps investors evaluate whether an investment is worth pursuing based on the average result they can expect over time. However, expected value relies on the assumption of repeated trials or a large sample size. In other words, the law of large numbers suggests that the more often you engage in similar investments, the closer your actual results will match the expected value. This means that while individual investments may have uncertain outcomes, consistently investing with positive expected values should lead to favourable long-term results. This is where the concept of survival becomes crucial in investing. Because expected value hinges on the long term, an investor must remain in the market long enough for the probabilities to play out in their favour.
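</p><p>As a toy illustration of expected value, the probabilities and payoffs below are entirely hypothetical; the point is only that a lower-probability bet with a large payout can carry a higher expected value than a higher-probability bet with a small one:</p>

```python
# Expected value: the probability-weighted sum of an investment's payoffs.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff per unit staked) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * payoff for p, payoff in outcomes)

long_shot = [(0.30, 3.00), (0.70, -0.50)]  # 30% chance of a +300% gain, 70% chance of -50%
safe_bet = [(0.90, 0.10), (0.10, -0.60)]   # 90% chance of +10%, 10% chance of -60%

print(round(expected_value(long_shot), 2))  # 0.55
print(round(expected_value(safe_bet), 2))   # 0.03
```

<p>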
Capital preservation and managing risk are key to ensuring that short-term losses or fluctuations don&#8217;t lead to financial ruin. If an investor takes on excessive risk and suffers a major loss, they may be forced out of the game, unable to benefit from future positive outcomes, even if their overall strategy has a good expected value.</p><p>As a consequence of the probabilistic world of investing, focusing on process over results is essential because it can be difficult to judge investment performance and skill based solely on outcomes. In investing, where each decision involves uncertainty and a range of possible outcomes, good decisions don&#8217;t always lead to favourable results, and conversely, bad decisions can sometimes produce positive returns due to randomness or lucky circumstances. This inherent variability makes short-term results a poor indicator of true skill and effectiveness. To judge skill more reliably, investors should emphasise long-term performance, as a larger sample size provides a clearer picture of probabilities and reduces the impact of chance. By analysing the investment process&#8212;such as decision-making strategies, risk management, and adherence to a well-defined plan&#8212;investors can better understand and refine their approach. In this probabilistic framework, evaluating the process rather than just the results offers a more reliable measure of an investor&#8217;s ability, helping to discern skill from luck and leading to more consistent and sustainable success over time.</p><p>As part of understanding probabilities, one must have an understanding of the role of risk, or the downside, in any investment. Although much of modern finance seeks to quantify risk through volatility, we believe this to be a misleading definition; we instead believe risk should be defined as the chance of permanent capital loss. Volatile, non-linear return paths are not of concern to us over the long term.
The use of volatility as a measure is natural, for it is comforting to be able to quantify risk, but in reality there has been no perfect numerical indicator of the risk of financial loss in any investment so far. Analysing risk instead requires an analysis of qualitative and quantitative factors regarding the underlying asset, and while there will be general frameworks you can use to estimate risk in a given investment, there will be no clear way to produce standardised and precise percentages of chances of permanent capital loss.</p><p>Beyond the obvious reason of not wanting to lose money, capital preservation and downside protection are crucial due to the simple yet powerful mathematical reality: losses and gains are asymmetrical. When an investment loses value, it takes a disproportionately larger gain to recover to the original level. This is due to the fact that percentage losses are calculated from a higher base, while subsequent gains are measured from a smaller one. For example, if an investment declines by 10%, it only takes an 11.1% gain to return to breakeven. However, a 50% loss requires a 100% gain to recover, and an 80% loss would demand a staggering 400% gain. This asymmetry underscores how much harder it is to recover from large drawdowns than it is to avoid them in the first place. The implications for investors are significant. Avoiding major losses preserves capital and allows compounding to continue uninterrupted. Since compounding is the foundation of long-term wealth creation, protecting the downside helps maximise returns over time. However, being careful with your downside doesn&#8217;t mean avoiding risk altogether; rather, it means taking measured risks where the potential reward justifies the exposure. A common quote attributed to Buffett is one that we should never forget: &#8220;Rule No. 1: Never lose money. Rule No. 2: Never forget Rule No. 
1.&#8221; In saying this, we must be careful to reinforce the idea that we do not believe volatility is a true measure of risk or downside. As a result, avoiding short-term drawdowns in price is not what we mean by capital preservation and downside protection; rather, we mean protecting against true long-term declines in the intrinsic value of an asset.</p><p>One important thing to note is the role of leverage in risk. Leverage has been a major factor in many investment failures and financial wipeouts. While it can significantly enhance returns by amplifying gains, it also magnifies losses, which can lead to severe financial setbacks, including complete loss of capital. This risk is particularly pronounced in situations where leverage is used excessively, as has been the case for many long-gone but once-thought-invincible financial institutions. Survival is a critical aspect of successful investing. The ability to endure through market fluctuations and unpredictable events is essential for long-term success. Overleveraging increases the risk of being wiped out by unforeseen events, even if an investment has strong long-term potential. Plenty of investments have had long-term solvency and return potential but have been cut short by sudden moments of market panic, with leverage wiping out investors even though the underlying investments would have recovered and prospered in the long term. Therefore, we recommend minimising the use of leverage or avoiding it entirely. Prioritising financial stability and conservative use of leverage helps ensure that you remain in the game long enough to achieve good results and navigate through market challenges.</p><p>With all this in mind, one of the core ideas in applying probability to investing is to focus on finding asymmetrical bets where the potential upside outweighs the downside risk. If an investment offers upside and downside of equal likelihood and size, the expected value balances out to zero, meaning that while you might experience some short-term gains due to luck, the long-term outcome will likely revert to where you started or worse. Furthermore, high risk does not always bring high reward, as markets can experience mispricing. In these cases, inflated prices may offer low return potential while simultaneously carrying a significant risk of future decline. This is particularly true when investor sentiment drives prices beyond the intrinsic value of an asset, creating an imbalance between risk and reward. In such scenarios, taking on high risk does not necessarily guarantee a corresponding high reward; instead, it exposes the investor to the possibility of substantial losses as the market corrects. To achieve asymmetric returns over the long term, it is essential to ensure that for every unit of risk you take on, there is a proportionately greater potential reward. This approach helps in identifying opportunities where the potential gains significantly exceed the potential losses, increasing the likelihood of favourable long-term results. The ideal investment is one that is low-risk and high-return; this is an obvious point, which is precisely why such opportunities are hard to find, but it is best not to settle for much less.</p><h3><strong>The Role of Expectations</strong></h3><p>A core misconception held by many is that good news should lead to an increase in asset prices and bad news to the reverse. While this is the case at times, there are other times when good news still leads to a decrease in asset prices and bad news leads to an increase.
The simple reason for this is that the pricing of an asset involves not only the past and present performance of the asset but also future expectations. Everyone can see the balance sheet of a company and deduce how much cash can theoretically be liquidated for its shareholders; the more difficult questions are how the company will perform in the future and how much cash it will produce for its owners moving forward.</p><p>Investors will price future expectations into the price they are willing to pay in the present, and as time unfolds and new information comes to light, investors will adjust their valuations and expectations of an asset. When news and performance are better than expected, and projections are more optimistic than previously assumed, prices tend to rise, while if news and performance are worse than expected, and projections are more pessimistic than previously assumed, prices tend to fall. The current price of an asset reflects the market&#8217;s expectations of its future, and when the future is realised, prices are adjusted as the new reality and expectations replace what was previously only future expectation in the valuation assumptions. The ultimate returns of an asset are a matter of the underlying performance, but the pricing of an asset is a matter of the expectations placed upon it. When an asset performs better or worse than what the market expects, that is where you will find sources of investment profit.</p><p>The core focus in value investing should always be on the cash an asset can generate over its lifetime attributable to shareholders, discounted to the present, but it&#8217;s useful to understand that pricing is a function of expectations and, as such, price often moves according to how a company performs relative to expectations. Understanding the market&#8217;s pricing helps to clarify your own thinking and to reach better conclusions about which assets may or may not be valued incorrectly by the markets.
In saying that, one must also keep in mind that there are other determinants of price, especially as technical and momentum analysis becomes more popular within some investment institutions; the same goes for the rise in popularity of passive investing, which leads to indiscriminate buying and selling regardless of fundamentals. Other determinants include regulatory and internal rule frameworks for institutional investors that lead to price movement.</p><p>In modern-day investing, with liquidation bargains increasingly rare, nearly all of the price paid is for the future return of an asset. As such, it is more important than ever to understand the expectations of others and see if you can find cases of incorrect assumptions in their expectations. Ultimately, many people can broadly and accurately value what most companies already have; it is what the future brings that moves the valuations of the market and individuals. You must beat expectations for successful investing, as market expectations are reflected in market price. Whether it be due to short- or long-term factors, the price of an asset today reflects the expectations of the market. Identifying where these expectations are sometimes misplaced leads to opportunities to find situations where the value of an asset is misaligned with its price, and hence a chance to profit as an investor.</p><h3><strong>The Psychology of Investing</strong></h3><p>Everyone has a plan until they get punched in the face. These famous words by Mike Tyson are not just for the boxing arena; they are relevant in all walks of life. The theory behind value investing, while not the easiest topic, is not particularly difficult either. The great divide between mediocre and exceptional investors is often not found in their understanding of valuation metrics or financial statements but in their ability to consistently make rational decisions.
This ability is deeply rooted in specific psychological traits&#8212;discipline, patience, rationality, and a certain level of contrarian thinking&#8212;that are essential for long-term success in investing.</p><p>Value investing is not a field with clear-cut answers; it&#8217;s filled with open-ended questions. Markets evolve, companies change, and the economy fluctuates in unpredictable ways. The best investors are perpetual learners and deep thinkers. They seek to continuously expand their knowledge, not only of financial statements, business operations, and economic indicators but also of human psychology and the behaviour of markets. In this field, there is no single path to success. Each investor must carve out their own strategy, rooted in their own understanding and experience. Continuous learning allows for adaptation, as nothing is static. The learners and thinkers, those who embrace curiosity and intellectual humility, tend to perform better in this ever-changing landscape. They understand that markets are complex systems, and they approach each investment with an open mind, constantly refining their approach. Moreover, investing involves a considerable amount of independent thinking. You need the intellectual curiosity to develop your own understanding of a company&#8217;s fundamentals and the patience to apply this knowledge over time. Successful investors avoid herd mentality, which often leads to overvaluation or undervaluation of assets, and instead rely on their deep understanding of the companies and industries in which they invest.</p><p>In value investing, discipline is paramount. While many investors understand the theory of identifying undervalued assets, consistently executing this approach is far more challenging. 
Successful value investing is not just about spotting undervalued assets; it requires the discipline to avoid irrational decisions, especially during periods of market volatility when others are abandoning rationality. Figures like Benjamin Graham and Warren Buffett championed the idea that markets can be irrational, and success lies in buying quality assets at a discount, but only disciplined investors can maintain this strategy when markets are either plummeting or soaring. True discipline is tested when emotions run high&#8212;when panic leads some to sell at the bottom, or exuberance drives others to buy at the top. During such times, discipline bridges the gap between theory and practice, ensuring long-term strategies remain intact despite short-term market fluctuations.</p><p>Key traits like patience, rationality, and self-awareness are essential for long-term success. Value investing often involves playing the long game, waiting years for payoffs or opportunities, and without patience, investors may sell too early or miss out on full potential gains. Rationality is crucial to staying the course when psychological impulses&#8212;such as the natural desire to avoid pain and seek immediate gratification&#8212;pull in the opposite direction. In downturns, rationality helps override emotional instincts to sell, while in bull markets, it prevents investors from getting swept up in euphoria and overpaying. At the same time, managing one&#8217;s cognitive biases is critical, as even experienced investors can fall victim to mental shortcuts that cloud judgment. By maintaining intellectual honesty and staying aware of these biases, disciplined investors ensure their long-term strategy remains rooted in rational analysis, allowing them to avoid the irrational temptations that can derail success.</p><p>Value investing also requires a healthy dose of contrarianism. 
When the majority of investors are moving in one direction, it can be difficult to go against the tide. However, some of the best investment opportunities are found precisely when everyone else is selling or ignoring a particular sector. To be a successful value investor, you must be willing to stand apart from the crowd and trust your own analysis, even when the market disagrees. To do the same as everyone else is to ask for the same results as everyone else over the long run; to skilfully and consistently obtain above-average results, you must be doing something different from the majority. Yet contrarianism must be intelligent: being contrarian and wrong is the path to the worst performance of all.</p><p>Buffett and Graham famously talk about &#8220;Mr. Market,&#8221; the metaphorical figure who offers stock prices to you every day. Mr. Market is highly emotional, often offering stocks at prices that bear no relation to their intrinsic value. Sometimes, he&#8217;s overly optimistic and prices stocks too high. Other times, he&#8217;s overly pessimistic and offers them too low. The job of the value investor is to assess whether the price offered by Mr. Market is reasonable in relation to the underlying value of the business. The investor must not panic and sell out of fear when the offered price is too low; rather, they must have the confidence to act when Mr. Market offers an irrationally low price, and the restraint to avoid buying when prices are inflated. Confidence is important, but it must be tempered with realism. Investors who are overly confident may end up holding onto losing positions for too long or buying into bubbles, whereas those who lack confidence may miss out on valuable opportunities.</p><p>In the end, the theory behind value investing is relatively straightforward, but successful execution is far more challenging. 
It requires a deep understanding of not only the markets and individual companies but also your own psychology. The traits that differentiate great investors from the rest&#8212;discipline, patience, rationality, and independent thinking&#8212;are not learned overnight. They are cultivated through experience, reflection, and a willingness to continuously improve. While investing may begin with a plan, it&#8217;s the ability to stay rational and grounded in the face of psychological factors that often differentiates those who execute from those who don&#8217;t.</p><h3><strong>Understanding an Investment</strong></h3><p>Successful investing requires much more than a superficial grasp of what a business does. To make informed and profitable investment decisions, an investor must deeply understand the business&#8217;s economics, competitive position, operational processes, and the sustainability of its long-term advantages. Understanding the economics of a business goes beyond just its products and day-to-day operations. Investors must evaluate the company&#8217;s position within its industry and relative to its competitors. What gives this business an edge in the market? What makes its position difficult to challenge? These are vital questions for determining whether a company has the potential for sustained profitability.</p><p>For example, a common mistake investors make when attempting to understand a business is assuming that new technology or innovations will automatically lead to higher profits. While increased productivity or a better value proposition can be appealing, competition plays a significant role in profitability. If competitors can quickly adopt the same technology or strategy, it leads to commoditisation, which often results in price wars and shrinking margins. Therefore, investors need to assess not just what a business is doing now, but how well it can differentiate itself and protect its competitive position in the future. 
In rapidly evolving sectors like technology or pharmaceuticals, it&#8217;s easy to assume a breakthrough product will bring immediate profits. However, without strong protective measures&#8212;such as patents, brand recognition, or specialised expertise&#8212;these innovations can quickly be replicated, eroding profitability. Thus, understanding how a business safeguards its innovations and how robust its competitive moat is becomes critical for investment analysis.</p><p>Anticipating where a business or industry will be in the future is just as important as understanding its current state. However, predicting the future is extremely difficult. Industries evolve, competitors adapt, and technological advancements often shift the landscape in unforeseen ways. Investors must have the humility and honesty to admit when the future is uncertain. Often, this means acknowledging that you simply do not know with enough certainty to make an informed decision. In such cases, it may be wiser to leave certain investments alone rather than taking unnecessary risks. Long-term thinking is crucial, but so is recognising when to refrain from making a move when the outlook is unclear.</p><p>Another essential principle of successful investing is recognising the limits of your knowledge, often referred to as the &#8220;circle of competence.&#8221; This concept encourages investors to stick to industries and companies they fully understand. Venturing beyond your expertise increases the likelihood of making poor decisions, as your analysis may lack the nuance required for accurate predictions. Without awareness of your circle of competence, overconfidence can cloud judgement, leading to poor long-term results. Staying within your knowledge base ensures a more disciplined approach and reduces the likelihood of costly mistakes.</p><p>There is a strong connection between being a successful businessperson and becoming an informed investor. 
As Warren Buffett once noted, &#8220;I am a better investor because I am a businessman, and a better businessman because I am an investor.&#8221; A businessperson&#8217;s insight provides a deeper understanding of the inner workings of a company and its competitive environment. Success in business goes beyond creating a great product&#8212;it requires managing resources effectively, staying ahead of competitors, and ensuring long-term sustainability. This knowledge is invaluable for investors when evaluating a company&#8217;s business model and its ability to maintain a competitive edge over time.</p><p>Understanding a business before investing requires more than just surface-level knowledge. It involves a comprehensive understanding of its economics, competitive landscape, and the unique advantages that set it apart from rivals. Additionally, investors must remain aware of their own limitations and avoid straying beyond their circle of competence. By focusing on long-term potential and recognising the risks posed by competition, investors can steer clear of common pitfalls and increase their chances of success. Ultimately, successful investing is not just about understanding what a business does, but about recognising the entire ecosystem in which it operates and positioning yourself to capitalise on long-term value creation.</p><h3><strong>Mr Market &amp; Margin of Safety</strong></h3><p>Although distinct concepts, Chapters 8 and 20 of <em>The Intelligent Investor</em> present two of the most fundamental ideas for shaping a value investor&#8217;s mindset: Mr. Market and the margin of safety. These principles provide both a mental framework for interpreting market behaviour and a practical guide for managing risk, forming the foundation of rational, disciplined investing.</p><p>Graham&#8217;s metaphor of Mr. Market captures the emotional and unpredictable nature of markets. Each day, Mr. Market offers prices at which you can buy or sell shares. 
Some days, he presents prices close to intrinsic value, offering little incentive to act. Other days, he becomes euphoric, inflating prices far above what the underlying business is worth. At times, he is despondent, offering deeply discounted prices that seem almost irrational. The critical insight is that these prices are not directives; they are conditional offers. Investors are free to buy, sell, or do nothing. By framing the market in this way, Graham teaches that prices are tools for opportunity, not measures of truth.</p><p>Understanding Mr. Market allows investors to gain patience, discipline, and emotional resilience. The intelligent investor can act decisively when mispricings occur and exercise restraint when prices are inflated. There will always be &#8220;dumb offers&#8221; to take advantage of, and the freedom to ignore offers that are unreasonably low so long as intrinsic value remains intact. Buffett emphasises that a rational investor could, in theory, ignore the market for years without consequence, as long as the underlying businesses continue to generate cash flows. Mr. Market is thus not the determinant of value; he is a mechanism for realising opportunity.</p><p>The margin of safety addresses the internal dimension of investing: the uncertainty inherent in judgment, forecasting, and analysis. Every investment involves imperfect information and assumptions about the future. The margin of safety provides a buffer against these uncertainties, limiting the risk of permanent capital loss while creating asymmetric upside potential. It ensures that even if some assumptions are incorrect or unforeseen events occur, the investment remains protected.</p><p>Quantitative analysis forms the traditional layer of the margin of safety. It can be applied to more than just the final investment decision of comparing intrinsic value with price, although one should still ensure a large margin of safety between the two when investing. 
True quantitative margin of safety incorporates both the magnitude of the discount and the conservatism of the assumptions used to estimate intrinsic value. A meaningful discount provides a buffer against errors in valuation, market shocks, or operational challenges, and enhances the probability of favourable outcomes. The larger the discount, the greater the protection and the more attractive the expected return, reflecting both prudence and opportunity under uncertainty.</p><p>Importantly, the margin of safety should be assessed in the context of the assumptions underlying the valuation: projected earnings growth, margins, reinvestment rates, and discount rates. A seemingly moderate price gap may still provide ample safety if the intrinsic value was calculated using conservative assumptions; conversely, a larger gap may be insufficient if the valuation already relies on aggressive projections. That said, one should not invest unless there is a large margin of safety, which minimises risk and thereby maximises return. Furthermore, since the margin of safety of an investment can often be represented by the gap between intrinsic value and price, the quantitative margin of safety provides not only protection against the downside but also a rational basis for expected returns.</p><p>However, the margin of safety is not purely numerical. Qualitative analysis is equally critical, and it too must be viewed through the lens of margin of safety. Investors should evaluate factors such as competitive advantage, operational resilience, management quality, corporate culture, and strategic positioning. A great business is one that can withstand competitive pressures, economic cycles, and operational challenges without threatening the durability of its cash flows. 
Even when assessing qualitative factors, the principle of margin of safety applies: the company&#8217;s strengths should provide a buffer against unforeseen shocks or mistakes in judgment.</p><p>A high-quality business purchased at a meaningful discount to intrinsic value combines the protective aspects of both quantitative and qualitative margins of safety, ensuring that price, quality, and resilience align to maximise both safety and upside potential. By integrating both quantitative and qualitative analysis, the margin of safety becomes a multidimensional safeguard. It allows investors to act confidently when market prices diverge from intrinsic value, while limiting downside risk if projections or assumptions prove inaccurate. This dual approach transforms uncertainty into a structured opportunity and enables rational decision-making in volatile markets.</p><p>Mr. Market and the margin of safety are not theoretical constructs; they are practical, philosophical, and strategic tools that define the mindset of a disciplined investor. Mr. Market externalises volatility and creates opportunities, while the margin of safety internalises prudence and protection. Together, they provide the framework for navigating uncertainty, acting rationally amid emotion, and capturing asymmetric opportunities with controlled risk.</p><h3><strong>Portfolio Construction</strong></h3><p>Now that we have established an understanding of the theory behind picking individual stocks and investments, the next question is how to construct an investment portfolio that maximises our overall returns, not just the returns of specific assets. The key principle to keep in mind before engaging in further analysis and reasoning is that a portfolio is no more than the sum of its parts. 
It&#8217;s easy in portfolio construction to think from a top-down perspective, but remember that the top is made up of the individual decisions and activities of the bottom, and as such, analyse a portfolio from both directions.</p><p>Before we delve into our specific views on how a portfolio should be constructed on this spectrum, it is important to discuss the crucial concept of sizing in investing. The return of an investment portfolio is not just about how often you are right or wrong about your specific investment theses, although it certainly helps to be right more often than not; rather, it is how much you stake when you are right and how much you stake when you are wrong that determines the overall returns. For example, you could be right in most of your investment theses but stake very small amounts on them while having a large amount invested in an investment thesis that turns out to underperform or lose money. The inverse situation is a portfolio where you are wrong about most of your investment theses but stake small amounts on these bets while staking a large amount on a singular correct investment thesis. Out of these two options, despite the first being more accurate in its analysis of the underlying assets, the second option, with many errors and few correct bets but heavy staking on the correct bet, will in all probability outperform the portfolio of mostly correct bets with only small stakes on them.</p><p>The optimal portfolio, assuming you have to own more than one asset, is one where you are right about your investment theses and those that have the highest return profile are staked the highest to maximise the total returns. Assuming you cannot reach this level of perfection, then the next best portfolio is one where you are staked large on the winners, and staked small on the losers. 
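The arithmetic behind this can be sketched with two hypothetical portfolios. All weights and returns below are invented purely for illustration: portfolio A is right four times out of five but badly sized, while portfolio B is wrong four times out of five but well sized:

```python
# Hypothetical sizing illustration (invented numbers): hit rate matters
# less than how much is staked on winners versus losers.

def portfolio_return(positions):
    """positions: list of (weight, return) pairs; weights sum to 1."""
    return sum(w * r for w, r in positions)

# Portfolio A: right 4 times out of 5, but the winners are tiny positions
# and the single loser is the largest holding.
a = [(0.10, 0.30)] * 4 + [(0.60, -0.40)]

# Portfolio B: wrong 4 times out of 5, but the losses are small positions
# and the single winner carries most of the capital.
b = [(0.10, -0.20)] * 4 + [(0.60, 0.80)]

print(f"A (mostly right, badly sized): {portfolio_return(a):+.1%}")  # -12.0%
print(f"B (mostly wrong, well sized):  {portfolio_return(b):+.1%}")  # +40.0%
```

Despite being right only once, portfolio B finishes well ahead, because the one correct thesis carried the capital while the errors were kept small.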
Again, you would rather be correct more often than not, and be staked large on those correct calls, but even if you are wrong more often than you are right, if you size correctly, you will still outperform those who are mostly correct in individual analysis but poor in their sizing of investments. Picking all the winners but putting nothing or only a little on them brings you far less financial gain than the person who is wrong most of the time on small bets but puts the house on a few of their rare winners. To keep it simple, the return of a portfolio isn&#8217;t really about how often you are right or wrong, but rather how much you bet when you are right or wrong.</p><p>Once the foundations of individual asset selection and position sizing are understood, the natural progression is to consider how these parts come together to form a coherent, high-performing investment portfolio. At the heart of portfolio construction lies a central tension: the trade-off between concentration and diversification. These represent two ends of a strategic spectrum. On one end, a portfolio may consist of only a handful of positions, or even just one high-conviction position, while on the other, it may hold dozens or even hundreds of holdings, each comprising a small fraction of the whole. The primary argument for concentrated investing is simple and compelling: you only need to be right a few times to generate outstanding returns&#8212;but you must be right in size. Some of the largest fortunes in investing history&#8212;those of Buffett, Munger, Soros, Druckenmiller&#8212;were not built on owning 50 to 100 minor positions. They were built on a few dominant insights, where capital was deployed with size, conviction, and patience. There is a practical ceiling to the number of truly exceptional ideas an investor can have at any one time and throughout their career. Ideas that are deeply understood, rigorously vetted, and held with high conviction are rare. 
Therefore, when such ideas do appear, they should be weighted accordingly in the portfolio. This is the essence of what Charlie Munger meant when he said: &#8220;The wise ones bet heavily when the world offers them that opportunity. They bet big when they have the odds. And the rest of the time, they don&#8217;t.&#8221; From a probabilistic perspective, being correct on a few large positions can generate outsized returns, especially when capital is concentrated into the highest expected-value opportunities. In fact, a few large bets over a career spanning a few decades are all that are needed for extraordinary returns. Put another way, the wealthiest people in the world tend not to be investors diversified across a range of underlying assets, but entrepreneurs with extreme concentration in a single asset, for a reason: you will earn far higher returns and wealth being all-in on the best opportunities than being spread thinly across a large range of good-to-bad options. The caveat is that you have to find, and manage to be part of, the best, which is much easier said than done.</p><p>Concentration inherently magnifies both upside and downside. The most obvious risk of a concentrated portfolio is being wrong in size. Even with thorough research and sound reasoning, reality often contains factors we cannot anticipate&#8212;what some refer to as <em>unknown unknowns</em>. Macroeconomic shocks, management missteps, competitive disruption, regulatory shifts, or simple misjudgments in thesis quality can all severely impair a concentrated holding. And for the individual investor&#8212;without the institutional resources of active fund managers&#8212;these risks can be harder to monitor, mitigate, or exit in real time. This is where diversification has its rightful place. Not as a hedge for ignorance, but as an expression of humility&#8212;a recognition that conviction, no matter how well-founded, does not guarantee correctness. 
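The idea of betting heavily when the odds are favourable has a well-known formalisation in the Kelly criterion, which we borrow here purely as an illustration (it is not a formula from the value-investing canon discussed above). For a binary bet with win probability p and net odds b, the growth-optimal stake is f* = p &#8722; (1 &#8722; p)/b. Notably, it never says to stake everything, which is the quantitative echo of the humility argued for here:

```python
# The Kelly criterion (our illustrative addition, not a formula from the
# text): for a binary bet, the capital fraction that maximises long-run
# compound growth is f* = p - q / b, where p is the win probability,
# q = 1 - p, and b is the net odds (profit per unit staked on a win).

def kelly_fraction(p, b):
    """Growth-optimal fraction of capital to stake on a favourable bet."""
    q = 1 - p
    return p - q / b

# A strong edge justifies a large position (here, 20% of capital)...
strong_edge = kelly_fraction(p=0.60, b=1.0)
# ...while a marginal edge justifies only a small one (4% of capital).
weak_edge = kelly_fraction(p=0.52, b=1.0)

print(round(strong_edge, 2))
print(round(weak_edge, 2))
```

Even with a clear edge, the prescribed stake stays well below 100%: the formula prices in the chance of being wrong, much as sensible diversification does.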
A moderate level of diversification&#8212;across industries, regions, or business models&#8212;functions as a risk control mechanism, buffering the portfolio from the outsized impact of any single failed investment thesis.</p><p>However, diversification taken too far becomes counterproductive. Beyond a certain point, it erodes the power of your insights and drags your portfolio closer to the average. The logic is simple: if you continue adding positions, especially into your 30th, 40th, or 50th best idea, you are no longer allocating capital based on high-conviction understanding but rather diffusing it in pursuit of safety through numbers. And the law of large numbers applies&#8212;the larger the sample size, the more likely you are to converge on median outcomes. In effect, excessive diversification becomes self-imposed mediocrity. This is especially irrational if, as a skilled investor, you can identify assets that are not just high-upside but also inherently safer than the average company. For instance, a business that sits at the forefront of innovation, possesses durable competitive advantages, boasts strong free cash flow, and is backed by robust financial resources is likely to have better long-term survival, compounding ability, and strategic optionality than the typical publicly listed company. If your top ideas meet this profile, then logically, your top 5&#8211;10 investments should carry less long-term risk than your 30th or 50th. Therefore, the idea that more diversification always equals lower risk is flawed. Risk should be measured not by the number of holdings but by the quality, durability, and clarity of understanding you have about each. Poorly understood diversification is no better than poorly understood concentration.</p><p>That said, some modest diversification is still prudent&#8212;not because you doubt your core ideas, but because markets are uncertain, and you can be wrong. 
Diversifying into a handful of uncorrelated but still high-conviction positions creates a margin of safety not by diluting quality, but by ensuring your portfolio isn&#8217;t fatally exposed to a single blind spot or unforeseen externality. In short, own enough to survive, but not so much that you forfeit the opportunity to thrive. Concentration in your best, most well-understood ideas is how wealth is built. Sensible diversification is how that wealth can potentially be protected from ruin. The key is knowing where confidence ends and overconfidence begins&#8212;and allocating accordingly.</p><p>In a concentrated portfolio, each holding wields significant influence over the whole. This makes it crucial to remain responsive to new information and to have ongoing vigilance. If the thesis changes&#8212;due to macro dynamics, deteriorating fundamentals, or management decisions&#8212;investors must be willing to reassess and adjust positions accordingly. Dogmatic holding is not the same as conviction; the former resists evidence, while the latter embraces it.</p><p>One of the most essential principles in portfolio construction&#8212;especially within a concentrated strategy&#8212;is the concept of opportunity cost. Capital is finite. Every dollar allocated to one investment is a dollar that cannot be allocated elsewhere. Therefore, the question is not merely &#8220;Is this a good investment?&#8221; but rather &#8220;Is this the best available use of my capital right now?&#8221; As an investor, your goal is not to simply hold assets that are expected to perform well in isolation, but to hold assets that offer the highest expected return relative to their risk and relative to all other opportunities you could be pursuing. In essence, you should be constantly optimising your portfolio to contain only those investments with the lowest opportunity cost&#8212;those for which no clearly superior alternatives exist within your circle of competence. 
This is not a call to trade frequently or to time markets. It is a call to actively monitor both the fundamentals of current holdings and the universe of potential replacements. Investments should be held not out of inertia or hope, but because they continue to represent the best possible deployment of capital given current knowledge.</p><p>Selling an investment is often harder than buying one. Behavioural biases, such as loss aversion or the endowment effect, can lead investors to hold onto underperforming assets too long, or resist swapping them out for better ones. But to be an effective capital allocator, you must be willing to reevaluate and reallocate when the facts change or better options emerge. There are generally two broad conditions that should prompt a sale or reduction of a position. The first is if there is a deterioration in the fundamental thesis. If new information emerges that challenges the core assumptions of your investment&#8212;declining competitive advantage, deteriorating balance sheet, loss of pricing power, management misalignment, or industry disruption&#8212;then a reassessment is required. In a concentrated portfolio, where each position materially affects total return, there is little room for thesis drift or complacency. The opportunity cost of holding a deteriorating asset can be immense. The second reason is if there is an emergence of superior opportunities. Even if a holding is performing adequately, it may still be prudent to reallocate capital if you identify an investment with better upside potential, lower risk, or both. This is where true capital discipline comes in. The mere existence of a decent idea is not enough; the bar should always be the best available idea. Great investors understand that the only investments worth holding are those that dominate the available opportunity set. 
However, one must also remember the transaction costs involved in changing positions and factor that into the opportunity cost in any given moment. Being open to reallocating capital does not mean trading reactively or impulsively. Instead, it means maintaining a mindset of fluidity over fixity. You are not married to any holding. Every position should earn its place in the portfolio every day it remains there.</p><p>One of the most underrated virtues in portfolio construction is patience. Investing is not a game of constant action, nor should a portfolio be a repository for mediocre ideas simply to appear &#8220;fully invested.&#8221; In reality, compelling opportunities are rare, and discipline requires the willingness to wait until they emerge. Holding cash is not a sign of indecision or fear&#8212;it is a rational response when the opportunity set fails to meet the required risk-reward threshold. Sometimes, the optimal allocation is to do nothing, to preserve optionality, and to remain intellectually and financially ready. As Warren Buffett famously said, &#8220;The stock market is a no-called-strike game. You don&#8217;t have to swing at everything&#8212;you can wait for your pitch.&#8221; This idea runs counter to the hyperactive tendencies of many investors who feel pressure to be constantly invested or to own a large number of positions. But superior portfolio construction often comes not from filling every slot, but from resisting marginal ideas and concentrating only on those rare investments that justify both capital and conviction. The ability to wait&#8212;deliberately and rationally&#8212;is as much a part of investing as the ability to act.</p><p>In the end, building a high-performing investment portfolio is not a function of complexity, volume, or constant activity&#8212;it is a function of clarity, conviction, and discipline. 
A well-constructed portfolio is one that reflects your best thinking, your deepest insights, and your most asymmetric opportunities, sized appropriately to their true potential and their risks. Concentration in a few high-conviction ideas with a degree of diversification is how we believe meaningful wealth is best created and sustained. What separates great investors from average ones is not just the ability to pick winners, but the ability to size them correctly, to walk away from mediocrity, and to reallocate capital without attachment when superior opportunities arise. A portfolio should not be a static collection of past decisions, but a living, breathing representation of your best ideas&#8212;right now. Every position you hold should justify its place, not just on its own merits, but in light of all that you could own instead. The ultimate goal is not to avoid being wrong, but to ensure that when you are right, you are right in size&#8212;and when you are wrong, the damage is survivable. Portfolio construction is therefore not merely a technical task, but a strategic discipline&#8212;one that sits at the core of successful long-term investing.</p><h3><strong>Outperformance</strong></h3><p>In the world of investing, active management is only financially worthwhile if you consistently outperform passive benchmark indices. The prevailing wisdom for most investors today is to focus on passive investing, which has generated attractive returns over recent decades, especially if compounded over time. However, passive investing carries its own risks, challenges, and limits, making active management a potentially worthy pursuit for those willing to invest the necessary time, energy, and expertise. Achieving investing outperformance is no easy feat, particularly in today&#8217;s competitive market. 
You must remember that in a world where the majority of investors are institutional, the counterparties you trade against, and the market itself, are composed of extremely intelligent and accomplished individuals backed by the frameworks and resources of large, established firms. You are competing, on average, against people who achieved top marks throughout their schooling, have an extreme work ethic, are ambitious and driven, and are mentored and taught by institutions with decades, if not centuries, of experience. It requires a degree of confidence, verging on arrogance, to think you can take on the market and emerge as a superior performer. The key to outperformance lies in a combination of analytical rigour, critical thinking, patience, and an acute awareness of market inefficiencies that can be exploited&#8212;especially when others are blinded by herd mentality or lack the ability to connect interdisciplinary ideas.</p><p>The Efficient Market Hypothesis argues that markets reflect all available information, making it impossible to consistently achieve excess returns. This is a crude summary, and the debate surrounding the validity of the theory will not be covered in this primer; we will simply assert our foundational conclusion: while markets can be efficient to a degree, inefficiencies have always existed and, we believe, will continue to exist. If you do not believe in market inefficiencies, there is no reason to pursue active investing. However, it is undeniable that as markets have evolved, investing has become more competitive. The advent of technology, the availability of massive amounts of data, the spread of investing and financial theory, and the democratisation of financial tools have levelled the playing field to an extent. Consequently, bargains and mispricings are arguably harder to find, and opportunities for easy, outsized returns are scarce. 
In this environment, active investors need a well-honed edge that allows them to see opportunities others miss.</p><p>One of the most important, yet often overlooked, factors in achieving outperformance is survival. Investing is not just about generating the highest returns in the short term but also about avoiding catastrophic losses that can wipe out gains and capital. Investors who survive over the long term, avoiding ruinous decisions, are more likely to benefit from market rebounds and compounding returns. Durability, therefore, is a cornerstone of long-term success. This requires a conservative approach to risk management, maintaining adequate liquidity, and avoiding leverage altogether, or at least avoiding overleverage if leverage must be used. Those who can stay in the game long enough have the opportunity to capitalise on market cycles and opportunities that others may miss due to poor risk management or being wiped out during downturns. For the institutional investor, capital structure becomes highly relevant here, as institutions built on permanent or more stable capital bases are far better able to focus on long-term success than those bound by short-term incentive structures.</p><p>Another valuable advantage an investor can possess is an analytical edge. An analytical advantage comes not from merely seeing the same data that everyone else sees but from interpreting it in a way that others have not yet considered. As the quote goes, &#8220;The task is, not so much to see what no one has yet seen; but to think what nobody has yet thought, about that which everybody sees.&#8221; This speaks directly to the essence of analytical outperformance. Investors who can take commonly available information&#8212;financial metrics, market trends, or company data&#8212;and apply a unique lens, combining both quantitative rigour and qualitative insights, can uncover opportunities that are hidden in plain sight. 
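The survival point made earlier rests on a simple asymmetry of percentages: a loss of fraction d requires a gain of d / (1 - d) just to break even, so deep drawdowns are disproportionately costly. A minimal sketch:

```python
# Gain required to recover from a fractional drawdown: a 50% loss
# needs a 100% gain just to get back to even.
def recovery_gain(drawdown: float) -> float:
    return drawdown / (1.0 - drawdown)

for d in (0.10, 0.30, 0.50):
    print(f"{d:.0%} loss -> {recovery_gain(d):.0%} gain to break even")
```

The required gain grows much faster than the loss itself, which is why durability and avoiding ruin compound into long-run results even when they look overly conservative year to year.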
By interpreting financial statements in a new light or considering qualitative factors like management quality and market positioning, an investor gains a deeper understanding of a company&#8217;s true potential. This fresh perspective allows them to act before the market fully recognises the value, leading to outsized returns. Ultimately, analytical advantage stems from the ability to process and synthesise information in a way that others cannot, providing a durable edge in an increasingly competitive market.</p><p>The best investors possess a natural disposition that allows them to remain rational and clear-headed when others succumb to emotional responses. This quality is akin to the principles of virtue ethics espoused by Aristotle, which emphasise that good character leads to sound moral decision-making. In investing, this means cultivating virtues such as patience and emotional control, which are essential for navigating the market&#8217;s inherent volatility. While markets can fluctuate wildly in the short term, they tend to reward patient, long-term investors. The ability to stay calm during market downturns or periods of irrational exuberance is crucial for achieving success. Investors who panic-sell during market crashes or chase overpriced assets during bubbles often find themselves facing significant underperformance. Patience transcends merely holding onto investments; it embodies the wisdom to wait for the right opportunities. Many investors falter because they feel compelled to act, even when no clear opportunity presents itself. The best investors understand that inactivity can be a virtue, recognising that waiting for the right price or moment to strike&#8212;despite the discomfort of sitting on cash for extended periods&#8212;can lead to more prudent decisions. 
By embodying certain traits that reflect their investment philosophy, investors align their character with effective strategies, ultimately enhancing their capacity to make sound choices in the market landscape.</p><p>One of the less discussed but powerful advantages in investing comes from consilience&#8212;the ability to draw knowledge from multiple disciplines. Most investors tend to be narrowly focused on financial metrics and traditional analysis, but the best opportunities often arise when investors can see connections across different fields, such as technology, psychology, sociology, science, and economics. This broader perspective enables investors to recognise trends that others may miss or to understand the potential impact of innovations before they are fully priced into the market. Generalists may outperform specialists because they can apply a broader range of knowledge to their decision-making. They are not constrained by the narrow confines of one discipline and can often see opportunities that more specialised investors overlook. Furthermore, because most investors are not operators, they can miss the basic realities of certain industries. A generalist with knowledge across a wide range of sectors may have a better understanding of how specific innovations or changes in one industry can ripple across others, providing an edge in identifying mispriced assets.</p><p>Above all, consistent outperformance requires the willingness to be contrarian when the evidence demands it. True gains arise not from following the crowd, but from acting where consensus is wrong. To identify and capitalise on opportunities, an investor must think independently, question prevailing narratives, and go against popular sentiment when the evidence justifies it. This path, however, is psychologically demanding: the contrarian often appears misguided, stupid, and reckless to others before ultimately being vindicated if correct. 
Mistakes are also more punishing when made in isolation, as the safety net of collective error is absent. Success as a contrarian demands not only intellectual courage and discipline, but also the patience to endure prolonged periods of doubt, criticism, and solitude before the market recognises the truth. And vindication comes only if you are correct; you may simply be wrong. In other words, if you are correct, you have to walk a lonely road and potentially be ridiculed and jeered as a fool to earn your superior returns, and if you are incorrect, you will walk that same lonely road, endure the same ridicule, and finish with inferior returns. Add to this the reality that in a probabilistic world you can be more intelligent, make a more rational decision, and still end up with a worse result, and you can begin to see the potential pain of being a contrarian. Do not be contrarian for the sake of being contrarian. Aim to be right, and if in the process of being right you have to be contrarian, then so be it.</p><p>In today&#8217;s competitive and increasingly efficient market, achieving outperformance through active investing requires a combination of traits and skills. Analytical rigour, critical thinking, patience, and a broad, interdisciplinary approach all contribute to identifying and capitalising on market inefficiencies. Survival and durability ensure that investors are around long enough to take advantage of these opportunities, while character and emotional control prevent them from succumbing to the irrational decisions that often lead to underperformance. 
The modern investor must not only master financial analysis but also cultivate a mindset that allows them to thrive in an ever-changing, increasingly competitive landscape.</p><h3><strong>Resources</strong></h3><p>The following is a list of a few resources you may find useful on your journey.</p><p>Books</p><p><em>The Intelligent Investor</em></p><p><em>Margin of Safety</em></p><p><em>University of Berkshire Hathaway</em></p><p><em>Poor Charlie&#8217;s Almanack</em></p><p>Videos and Talks</p><p><em>Berkshire Hathaway Annual Shareholder Meetings</em></p><p><em>The Practice of Value Investing</em></p><p><em>The Psychology of Human Misjudgement</em></p><p>Articles</p><p><em>Berkshire Hathaway Letter to Shareholders</em></p><p><em>Oaktree Insights</em></p><p><em>Consilient Observer</em></p><p>Podcasts</p><p><em>Business Breakdowns</em></p><p><em>Acquired</em></p><p><em>*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.</em></p><p><strong>Bonnefin Research is a free publication focused on better investment thinking. 
Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.</strong></p>]]></content:encoded></item></channel></rss>