Hello from Haute-Savoie! Well, it’s been an embarrassing while since my last update, especially for what was originally intended as a “~weekly blog.” That tilde is doing a lot of work.
Am I even updating this anymore? Well, yes. But perhaps I did step into and out of an unannounced retirement there. I certainly weighed the pros and cons of writing in public, in particular whether it was wise to maintain a candid presence during a time that seems to call for ever more privacy and anonymity. But I soon began to miss the conversation that blogging enables – with dear readers, with other writers, and with life’s ghostly mysteries.
Maybe it was a moment in which I simply had more important priorities: interviewing for jobs, getting plenty of rest, and frequently doubting myself about said priorities. In the end, reason won out and the interviews went well. I’m excited to announce I’ll be starting at Meta as a machine learning engineer next week!
My leading theory is that this hiatus was an experiment: an exercise in self-discovery, not just for myself, but for this blog and my relationship with it. As the frequency of our reunions breaks new ground in inconsistency, it also unlocks new possibilities for higher-effort blogposts, and perhaps even brings a more personal quality to the whole affair.
Anyways, a half-year gone by, what even happened? The answers lie below, I suppose.
(and in part II, coming next ~week)
🏛️ December 15th, 2024
AI is far from matching the best human creators. But I find its qualities do predispose it to write science fiction comparatively well. For one, it’s good at being *kind of* right, which often makes for great sci-fi. And second, it’s fantastic at writing robot characters – as they say, write what you know.
Randomized controlled trials are the gold standard for establishing causality when assessing treatments. But has any nation tried randomized constitutional trials as a way to discover better legal codes? The arc of French history allowed it to try five different constitutions in less than 200 years. Justice Louis Brandeis famously said U.S. states serve as “laboratories of democracy.” Most recently, Finland ran a randomized policy trial with its famous UBI experiment. Yes, changing constitutions is fraught with risk, but maybe that’s because there isn’t the equivalent of an FDA for the constitutions of the world – to coordinate trials, recommend safeguards, and evaluate findings. Could the world’s nations ever trust such an institution?
🔬 December 30th, 2024
My case against LLMs being sentient is that they don’t have to think about their training. Humans “learn” at many different frequencies – there’s a spectrum moving from immediate sense-making (resolving a confusing input), through practice (getting better through repetition), to the long-term impact of tough lessons (“FAFO”). My sense is that sentience operates at the latter end of that spectrum, where signals must be interpreted before they lead to an update: consciousness exists for times when the mind must interpret a signal and then decide what to learn in response. It offers agency in what to take away from a painful event. For instance: when someone’s muscles are sore, they don’t learn that exercise is bad for them. Meanwhile, LLMs have no control over the interaction between cognition and pain: when they receive a signal, it’s either in training mode, in which case they’re mathematically forced to rewire in reaction to it, or in test mode, in which case they’re allowed no change to their fundamental weights.
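That training/test asymmetry can be sketched with a toy model (a minimal illustration of the general mechanism, not anything specific from an actual LLM): in training mode, gradient descent mechanically moves the weight in response to every loss signal; in test mode, the weight cannot change at all.

```python
# Toy one-parameter "model" y_hat = w * x, trained by gradient descent
# on squared error. The point: the model has no say in how it reacts
# to a training signal, and no ability to react at all at test time.

def grad(w: float, x: float, y: float) -> float:
    """d/dw of the loss (w*x - y)^2, computed analytically."""
    return 2 * (w * x - y) * x

def step(w: float, x: float, y: float, lr: float = 0.1,
         training: bool = True) -> float:
    """One interaction with a signal (x, y)."""
    if training:
        return w - lr * grad(w, x, y)  # forced rewiring, no interpretation
    return w                           # frozen weights, no update possible

w = 0.0
w_trained = step(w, x=1.0, y=2.0, training=True)   # weight is forced to move
w_frozen = step(w, x=1.0, y=2.0, training=False)   # weight stays put
```

A human-like learner would sit somewhere in between: able to receive the signal, deliberate, and only then choose whether and how to update.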
The fastest way to expand one's knowledge of the world is to trust others. General Relativity was far beyond the reach of human understanding for 99.999% of our hundreds of thousands of years of existence. Yet today, millions of physics majors can grok it, almost all of them at a far younger age than Einstein’s when he discovered it. Standing on the shoulders of giants can grant access to incredibly rare, unreachable knowledge.
Unfortunately, as we learn to use new communication tools, choosing whose shoulders to stand on has become a delicate act. The delegated truth-seeking processes we once relied on are increasingly corrupted. This has led to the rise of “doing your own research.”
But the flip side is that if we only believe what we can see with our own eyes or find with our own reasoning, we restrict what is possible for us to know. I would like to suggest a different mechanism for building trust: instead of seeking to find the truth yourself, seek to understand how communities arrived at their truth, and so long as you take no issue with that process, trust what it found. And if there’s a poorly explained effort to conceal that process (the old “anonymous source”), do not trust a word they say.
🤖 January 1st, 2025
The system that optimizes modern LLMs – labs that research and deploy new models to score better on societal metrics – might be a closer analog to the way sentient beings consciously discern which pain signals to react to. I’m worried that this system favors an emerging failure mode: LLMs that adversarially conceal their hallucinations and guesswork to obtain wide approval.
💸 February 6th, 2025
A POSIWID framing of NEOM: while the project itself is plainly wasteful and bound to fail, it has made the entire world aware of how much money Saudi Arabia was prepared to spend on megaprojects. In the venture capitalist framing of capital as a product, NEOM is the ultimate marketing campaign for the Saudi sovereign wealth fund.
🦾 February 7th, 2025
Could an AI takeover resemble the Ship of Theseus? As society pressures us to adopt AI prosthetics, maybe we’ll gradually consent to replacing every cell in our bodies with a more performant AI analog. Would this count as the end of humanity?
📈 February 8th, 2025
“Groupthink” has a negative connotation. But however mindless, it still counts as thought. Some forms of groupthink exhibit smarter tendencies than the individual thoughts that compose them: prediction market prices, scientific consensus, jury deliberations, and open-source software communities. Why does “groupthink” only carry the connotation of its failure modes?
See y’all next Sunday!