The latest AI article by Dario, CEO of Anthropic

Thomas recommended an article to me yesterday: "Machines of Loving Grace - How AI Could Transform the World for the Better," written by Dario Amodei, CEO of Anthropic. Dario earned a Ph.D. in biophysics from Princeton University and was a postdoctoral researcher at Stanford University School of Medicine.

Concerns

In the article, Dario raised many concerns about the development of AI:

  • The basic development of AI technology and many (but not all) of its benefits seem inevitable (unless risks derail everything), driven primarily by powerful market forces. On the other hand, risks are not predetermined, and our actions can significantly alter the probability of these risks occurring.

  • When AI companies overemphasize the benefits of AI, it easily gives the impression that they are avoiding negative issues or even engaging in propaganda. I believe that, in principle, speaking too much "from one's own perspective" is also harmful to personal integrity.

  • I often feel uncomfortable with the way many public figures discussing AI risks (let alone leaders of AI companies) describe the world after AGI (Artificial General Intelligence). It seems like they want to act as prophets leading people toward "redemption." I think it is dangerous to view companies as unilateral forces shaping the world, and it is inappropriate to interpret technological goals in a religious manner.

  • While I think most people underestimate the potential of powerful AI, those currently discussing radical AI futures often do so in overly "sci-fi" tones (e.g., uploading consciousness, space exploration, or cyberpunk-style futures). This makes it hard for people to take the claims seriously and casts a layer of unreality over the ideas. To be clear, the issue is not whether these technologies are possible or how likely they are (the main text discusses this in detail), but rather that the "atmosphere" smuggles in cultural baggage and unstated assumptions about what kind of future we want and how social issues will evolve. Such discussions often end up resembling the fantasies of a niche subculture, alienating most people. (Yesterday's Elon Musk event followed this sci-fi route.)

Even so, Dario argues that fear of these risks alone is not a sufficient motivator; a hopeful, concrete picture of a good outcome is equally indispensable.

Basic assumptions and frameworks

Dario prefers the term "powerful AI" over "AGI" (Artificial General Intelligence) and believes such an AI might resemble today's LLMs (large language models), though potentially based on different architectures or training methods. It has several key characteristics, summarized simply as "a country of geniuses in a datacenter":

  1. Raw intelligence: In fields such as biology, programming, mathematics, engineering, and writing, this AI surpasses Nobel laureates in pure intelligence. It can prove unsolved mathematical theorems, write extremely complex code, and even produce excellent novels.

  2. Interfaces: This AI is not just "an intelligent entity you can converse with"; it has every interface a human has for working in a virtual environment, including text, audio, video, mouse/keyboard control, and internet access. Through these interfaces it can perform tasks on the internet, issue instructions to or receive them from humans, order materials, direct experiments, watch or create videos, and more, surpassing the best humans globally at these tasks.

  3. Autonomy: This AI does more than passively answer questions; it can be assigned tasks that take hours, days, or even weeks to complete, executing them independently like a capable employee and asking for clarification only when necessary.

  4. Control of physical tools: Although the AI has no physical form (beyond its presence on computer screens), it can control existing physical tools, robots, or experimental equipment through a computer. In principle, it could even design robots or devices for its own use.

  5. Speed and scale: The resources used to train this AI can be repurposed to run millions of instances (expected to match cluster scales by 2027), each absorbing information and generating actions 10 to 100 times faster than humans, though it may be limited by the response time of the physical world or the software it interacts with.

  6. Collaboration: Each instance can work independently on a different task or collaborate with others the way humans do, and subgroups can be fine-tuned to excel at specific tasks as needed.

Dario gave an interesting example from economics: discussions usually focus on the marginal returns to factors of production such as labor, land, and capital. Simply put, a particular factor can become the limiting factor in a given situation. For example, an air force needs both planes and pilots, but adding more pilots is useless without enough planes.

In the AI era, we should start discussing the "marginal returns to intelligence." As AI becomes increasingly intelligent, we need to understand which other factors complement intelligence and what factors will become new bottlenecks or limitations when intelligence reaches extremely high levels.

Therefore, to understand how intelligence affects the speed and effectiveness of production and problem-solving, we need to ask the question:

how much does being smarter help with this task, and on what timescale?
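The pilots-and-planes example can be sketched as a toy "limiting factor" production model (a hypothetical illustration of the economics, not code from the essay; the function names are my own):

```python
# Toy "limiting factor" production model: output is capped by the
# scarcest complementary factor (a Leontief-style min function).
def sorties(planes: int, pilots: int) -> int:
    """Air-force output: each sortie needs one plane AND one pilot."""
    return min(planes, pilots)

def marginal_return_of_pilots(planes: int, pilots: int) -> int:
    """Extra output from adding one more pilot, holding planes fixed."""
    return sorties(planes, pilots + 1) - sorties(planes, pilots)

# With 100 planes and 50 pilots, one more pilot adds a sortie...
print(marginal_return_of_pilots(planes=100, pilots=50))   # -> 1
# ...but once pilots match the number of planes, more pilots add nothing.
print(marginal_return_of_pilots(planes=100, pilots=100))  # -> 0
```

The `min()` captures complementarity: once pilots are no longer the scarce factor, the marginal return of another pilot drops to zero. "Marginal returns to intelligence" asks the same question with intelligence as the factor being added.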

Limits

Dario mentioned several major constraints:

  1. Speed of the outside world: Hardware, materials science, communication with humans, and even existing software infrastructure all have upper limits on their speed. Moreover, scientific experiments are often sequential: each experiment depends on the results of the previous one. This means the time to complete large projects (such as developing cancer treatments) may have an irreducible minimum that cannot be shortened even as intelligence continues to improve.

  2. Need for data: For example, today's particle physicists, despite being very intelligent and proposing many theories, lack sufficient evidence to verify or choose among them because data from particle accelerators is limited. Even a superintelligent AI may be unable to make significant progress without enough data.

  3. Intrinsic complexity: Some systems are inherently unpredictable or chaotic, and even the most powerful AI cannot predict them much better than humans or current computers can. In forecasting chaotic systems such as the three-body problem, for example, AI's advantage may amount to extending predictions only slightly beyond today's methods.

  4. Constraints from humans: Many things cannot be accomplished without violating laws, harming humans, or disrupting society. Nuclear energy, supersonic flight, and even elevators are advanced technologies whose impact has been greatly reduced by regulation or fear.

  5. Physical laws: This is a stricter version of the first limitation. Certain physical laws appear unbreakable: it is impossible to travel faster than light; mixed pudding cannot be unmixed; there is a limit to how many transistors fit on a square centimeter of chip before they become unreliable; and computation requires a minimum amount of energy to erase each bit, which limits the density of computation in the world.
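The first limitation, sequential experiments and the speed of the outside world, can be quantified with an Amdahl's-law-style bound: if only a fraction of a project's wall-clock time can be accelerated by intelligence, the overall speedup is capped no matter how smart the AI gets. A minimal sketch, with hypothetical numbers of my own choosing:

```python
# Amdahl's-law-style bound: if only fraction p of a project's time can be
# accelerated by intelligence (by factor s), overall speedup is capped.
def project_speedup(p: float, s: float) -> float:
    """Overall speedup when fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Suppose 70% of a drug-development pipeline is design/analysis work and
# 30% is irreducible sequential lab time. Even with 1000x faster thinking:
print(round(project_speedup(p=0.7, s=1000), 2))  # -> 3.33
# The hard ceiling as s grows without bound is 1 / (1 - p):
print(round(1 / (1 - 0.7), 2))                   # -> 3.33
```

However intelligent the AI, this hypothetical pipeline never runs much more than about 3.3x faster until the non-accelerable 30% itself shrinks, which is exactly what the next section on timescales is about.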

Timescale

In the short term, some factors may be hard limits that AI cannot break through, but in the long term, these limitations may become more malleable. For example:

  • New experimental methods: AI might develop new experimental methods, allowing data previously obtainable only through live-animal experiments to be gathered through in vitro experiments instead.
  • New tools: AI can help us design and build new instruments, such as larger particle accelerators, to obtain currently missing scientific data.
  • Better institutions: Within ethical boundaries, AI might help us improve clinical trial systems, reduce bureaucracy, and even create new jurisdictions to make clinical trials more efficient or cost-effective. It could also advance the underlying science, reducing the need for human clinical trials.

We should view the role of intelligence dynamically. Initially, intelligence may be severely constrained by other factors of production, but over time AI will gradually find ways around these bottlenecks, although some (such as physical laws) will never disappear entirely.

So another question we need to ask is:

How fast does it all happen and in what order?

Tomorrow's notes will begin analyzing AI's potential to directly improve human life in the following five areas:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty issues
  4. Peace and governance
  5. Work and meaning