How Google DeepMind scientist Nicholas Carlini uses AI (Part 1)

Someone recommended an article to me: "How I Use 'AI'" by Nicholas Carlini. Link: https://nicholas.carlini.com/writing/2024/how-i-use-ai.html.

Nicholas Carlini is a scientist at Google DeepMind.

First, Nicholas Carlini believes that AI (here mainly meaning large language models) has not been overhyped. Although the internet has seen bubbles before, much of lasting value remained after they burst, and many applications once confined to science fiction have since become real.

Over the past year, Nicholas Carlini has spent several hours each week interacting with LLMs (perhaps more time than many men spend with their wives). He estimates that, thanks to these models, the speed at which he writes code has increased by at least 50%, on both research projects and personal projects.

He categorizes the use of AI into two main types: "helping me learn" and "automating boring tasks."

Below are some specific examples of how Nicholas Carlini uses AI:

  • Building complete web applications using technologies he has never touched before.
  • Learning how to use various frameworks he had no prior experience with.
  • Converting dozens of programs into C or Rust to improve performance by 10 to 100 times.
  • Streamlining large codebases, significantly simplifying projects.
  • Writing initial experimental code for almost every research paper he wrote last year.
  • Automating almost all monotonous tasks or one-off scripts.
  • Almost completely replacing web searches when setting up and configuring new software packages or projects.
  • When debugging error messages, no longer relying on web searches about 50% of the time.

Over the next few days, I will gradually share the details Nicholas offers for each of these cases.

As Nicholas said:

LLMs cannot do everything, nor even most things. But the models that exist today already provide us with considerable value.

Although many of these tasks could be accomplished by having an undergraduate intern spend a few hours researching or grinding through them, it is not practical to hire large numbers of interns, whereas we do have access to large numbers of LLMs. Five years ago, the best LLMs could only produce paragraphs that looked like English, and their practical uses were close to zero. Today, Nicholas can already hand 50% of his work to LLMs. Perhaps in another five years there will be even greater changes; whether that prospect excites or frightens people remains to be seen.