The Knowledge Problem and the Feedback Mechanism
A few days ago, under one of my Facebook posts on AI and the knowledge problem, I saw an interesting exchange between two economist friends of mine. One writes: “AI can for sure discover types (quite easily, I believe). Also, markets don't 'anticipate' changes in anything (at least not in a fundamental way).” The other replies: “That's not the point. The point for AI not substituting markets is the inability of AI to 'anticipate' what markets would do.”
This latter comment sparked some thoughts.
Let me begin with what I find obvious: the definition of optimal allocation cannot be parasitic on market allocation. Any normative concept of “optimal” should be stated without smuggling in the very institution whose merits we want to appraise. Pareto efficiency, Rawlsian max-min, Kaldor-Hicks potential compensation, or a utilitarian social-welfare function can all be defined without mentioning prices, firms, or equilibria. Once we have that independent yardstick, we can ask how well markets (or AI planners, or democratically chosen rules) approximate it.
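To make that independence concrete, here is a minimal formal statement of two such yardsticks, in standard textbook notation (x_i is agent i's bundle, ω_i the endowment, u_i the utility function); notice that no price, firm, or equilibrium appears anywhere:

```latex
\text{Feasibility:}\quad \sum_i x_i \le \sum_i \omega_i .

\text{Pareto efficiency:}\quad \nexists \text{ feasible } x' \text{ with }
u_i(x'_i) \ge u_i(x_i)\ \forall i \ \text{ and } \ u_j(x'_j) > u_j(x_j) \text{ for some } j .

\text{Utilitarian optimum:}\quad \max_{x\,\text{feasible}}\ W(x) = \sum_i \lambda_i\, u_i(x_i) .
```

Prices are nowhere in sight, and that is the whole point: the yardstick stands on its own.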
General-equilibrium theory shows that, under heroic assumptions, a perfectly competitive equilibrium is Pareto efficient. But that is an overlap, not an identity: Pareto efficiency is defined in terms of preference satisfaction and feasibility, while competitive equilibrium is defined in terms of price-taking behavior and zero excess demand. The match is contingent, not conceptual.
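For contrast, here is the other object in the overlap, defined through an entirely different vocabulary of prices and budget sets:

```latex
\text{Competitive equilibrium } (p^*, x^*):\quad
x_i^* \in \arg\max_{x_i}\; u_i(x_i) \ \text{ s.t. } \ p^* \cdot x_i \le p^* \cdot \omega_i ,
\qquad \sum_i x_i^* = \sum_i \omega_i .
```

The first welfare theorem links the two under assumptions such as local non-satiation; the definitions themselves never mention one another.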
If we collapse the two, we risk circularity: “Markets are good because they yield the market outcome.” By decoupling the yardstick from the mechanism, we keep space for criticism and for comparing alternative mechanisms (including AI-guided allocation) against the same external criterion.
To see why this distinction matters, imagine a social planner armed with a state-of-the-art machine-learning oracle. Suppose the oracle could forecast every individual demand schedule and technology set with uncanny accuracy. Even then, the question “What allocation is best?” cannot be answered merely by computing the equilibrium of an artificial Walrasian auction. We still need a normative compass, a principle telling us which feasible allocation ranks highest, before any algorithm can get to work. Whether we choose Pareto, a weighted utilitarian score, or some capabilities-based index, that compass is logically prior to, and methodologically separable from, the market feedback.
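To see the separation in miniature, here is a toy sketch of such an artificial Walrasian auction (all names, functional forms, and parameters are invented for illustration). The "oracle" knows every demand function and grinds out market-clearing prices by tâtonnement, yet nothing in the computation says whether the resulting allocation is best; that verdict needs a welfare criterion supplied from outside:

```python
import numpy as np

# Toy tatonnement in a two-agent, two-good Cobb-Douglas exchange economy.
# The "oracle" already knows every demand function (the alphas below),
# which is exactly the heroic assumption in the text.

def excess_demand(prices, endowments, alphas):
    """Aggregate Cobb-Douglas demand minus aggregate endowment."""
    wealth = endowments @ prices                    # each agent's budget at current prices
    demand = alphas * wealth[:, None] / prices      # x_ig = alpha_ig * wealth_i / p_g
    return demand.sum(axis=0) - endowments.sum(axis=0)

def tatonnement(endowments, alphas, steps=5000, lr=0.01):
    prices = np.ones(endowments.shape[1])
    for _ in range(steps):
        z = excess_demand(prices, endowments, alphas)
        prices = np.maximum(prices + lr * z, 1e-6)  # raise prices where demand exceeds supply
        prices /= prices.sum()                      # only relative prices matter
    return prices

endowments = np.array([[1.0, 0.0],    # agent 1 owns all of good 1
                       [0.0, 1.0]])   # agent 2 owns all of good 2
alphas = np.array([[0.6, 0.4],        # expenditure shares the oracle "knows"
                   [0.3, 0.7]])

print("market-clearing prices:", tatonnement(endowments, alphas))
# Computing these prices ranks nothing. Whether the implied allocation is
# "best" still depends on a normative criterion chosen outside the model.
```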
What we usually praise as “the magic of markets” is really two distinct engines working in tandem (on this see Hurwicz, 1972; and Maskin, 1999). The first is knowledge discovery: millions of agents, each holding local, often tacit information, reveal slivers of what they know simply by making offers to buy and sell. The second is error correction: when someone guesses wrong (e.g., plants grapevines nobody wants, orders too much saffron, misprices a battery pack), profit and loss push them to revise their plans or exit. Prices are just the interface where those two engines meet.
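A stylized simulation makes the two engines visible (every number and threshold below is invented): agents hold private conjectures about an unobservable demand level, losses force the bad conjectures to exit, and imitation with small tweaks stands in for entry:

```python
import random

random.seed(42)

TRUE_DEMAND = 0.7                     # the unobservable "right answer"
N = 100

agents = [random.random() for _ in range(N)]   # dispersed private conjectures

for period in range(50):
    # Engine 2, error correction: conjectures far from the truth make
    # losses and exit (profit below an arbitrary survival threshold).
    survivors = [g for g in agents if 1.0 - abs(g - TRUE_DEMAND) > 0.6]
    # Engine 1, knowledge discovery: entrants imitate a surviving
    # conjecture with a small tweak, probing nearby possibilities.
    while len(survivors) < N:
        survivors.append(random.choice(survivors) + random.gauss(0, 0.05))
    agents = survivors

print(f"average conjecture: {sum(agents)/len(agents):.2f} (truth: {TRUE_DEMAND})")
# No single agent ever observes TRUE_DEMAND; the population homes in on
# it through the profit-and-loss interface alone.
```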
Large language models already ingest oceans of data, so it is tempting to say, “Just feed the oracle every sensor reading and sales receipt on earth, and it will spit out the socially optimal plan.” But that vision quietly assumes away the very process that produces the data. Without a decentralized feedback loop that rewards good conjectures and punishes bad ones, the stream of observations grows progressively less informative. The oracle has no built-in mechanism for weeding out bad guesses, the role that real markets delegate to agents risking their own balance sheets.
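Rerunning the same toy with the loss mechanism switched off makes the degradation concrete: with no exit, the stream of conjectures never concentrates around the truth, and each marginal observation teaches an onlooking oracle less:

```python
import random, statistics

TRUE_DEMAND = 0.7
random.seed(42)

def spread_after(with_feedback, periods=50, n=100):
    agents = [random.random() for _ in range(n)]
    for _ in range(periods):
        if with_feedback:
            agents = [g for g in agents if 1.0 - abs(g - TRUE_DEMAND) > 0.6]
            while len(agents) < n:
                agents.append(random.choice(agents) + random.gauss(0, 0.05))
        else:
            # Feedback off: everyone survives, conjectures just drift.
            agents = [g + random.gauss(0, 0.05) for g in agents]
    return statistics.pstdev(agents)

print("spread of conjectures with losses:   ", round(spread_after(True), 2))
print("spread of conjectures without losses:", round(spread_after(False), 2))
# Without the weeding-out, dispersion grows period after period: the data
# stream the oracle ingests becomes progressively less informative.
```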
Markets are good at knowledge discovery because they unleash countless heterogeneous searchers, each wagering real money and reputation. AI might replace that process, and potentially improve on it, by predicting types and by scanning weak signals (e.g., patent texts, supply-chain hiccups, social chatter) faster than humans ever could. Markets, though, shine at error correction: losses have teeth. Here, AI (at present) can only complement rather than replace, by running rapid counterfactuals and stress tests that surface errors sooner and more cheaply than a bad quarter's earnings would.
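Here is what that complementary role might look like, in caricature (demand distribution, costs, and quantities are all invented): instead of waiting for a bad quarter to reveal an overambitious production plan, a planner simulates thousands of demand scenarios and flags the loss-prone plan in advance:

```python
import random

random.seed(7)

def quarterly_profit(plan_qty, price=10.0, unit_cost=6.0, fixed_cost=200.0):
    demand = max(random.gauss(120, 40), 0)   # one simulated demand scenario
    sold = min(plan_qty, demand)
    return sold * price - plan_qty * unit_cost - fixed_cost

def loss_probability(plan_qty, trials=10_000):
    return sum(quarterly_profit(plan_qty) < 0 for _ in range(trials)) / trials

for qty in (80, 120, 200):
    print(f"plan for {qty} units -> P(loss) = {loss_probability(qty):.2f}")
# The 200-unit plan can be rejected before a single real loss is booked;
# a market would teach the same lesson, but only after the bad quarter.
```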
If the goal is to meet whichever independent normative standard we adopt, the right question is not whether AI can anticipate markets. The sharper question is whether any non-market mechanism can simultaneously (i) gather decentralized knowledge and (ii) punish errors quickly enough. I think AI could potentially do well at (i), but I'm agnostic as to whether it will be able to deliver (ii).
However, as I hinted in my previous post, there's another way in which AI can partly replace markets: by eliminating scarcity in some segments of the economy, pushing marginal costs towards zero. Think of fully synthetic media, self-replicating code, or drug-discovery pipelines in which the expensive wet-lab phase is replaced by accurate in-silico simulation. When copying an extra file, rendering an extra image, or searching an additional chemical conformation costs almost nothing, prices stop performing their usual rationing role; anyone who wants the good can have it without crowding anyone else out. In such “post-scarcity” niches, allocation problems collapse into bandwidth or attention problems, and those can be handled by queues, open licenses, reputation systems, or simple first-come-first-served algorithms rather than full-blown markets.
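In those niches the residual problem is managing the bottleneck, not pricing the good. A sketch of how little machinery that takes (names and capacities invented):

```python
from collections import deque

# First-come-first-served allocation of a near-zero-marginal-cost good.
# The only scarce thing is serving capacity per tick (bandwidth), so a
# plain queue does the job that prices do elsewhere.

class FCFSAllocator:
    def __init__(self, slots_per_tick):
        self.queue = deque()
        self.slots_per_tick = slots_per_tick

    def request(self, user):
        self.queue.append(user)

    def tick(self):
        n = min(self.slots_per_tick, len(self.queue))
        return [self.queue.popleft() for _ in range(n)]

alloc = FCFSAllocator(slots_per_tick=2)
for user in ("ana", "ben", "chloe", "dev"):
    alloc.request(user)
print(alloc.tick())   # ['ana', 'ben']
print(alloc.tick())   # ['chloe', 'dev']
# Nobody is priced out: the copy itself is free, only the serving slot
# is momentarily rivalrous.
```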
That said, most economic activity is still tethered to scarce complements (e.g., lithium for batteries, land for data centers, skilled engineers’ time, or simply the electricity to run the servers). As long as some bottleneck remains rivalrous, prices will keep surfacing information about opportunity cost. So the immediate future is likely a patchwork: segments where AI really does make scarcity evaporate and markets fade into the background, and much larger swaths where prices, profit, and loss continue to supply the discipline that keeps knowledge discovery and error correction alive.
