On Using LLMs in Research

Started: 26 Apr 2026
Updated: 26 Apr 2026

I started using LLMs more frequently at the beginning of the year, when they began to feel useful for answering technical questions. I only started using them seriously in my research around March. By then, they had become so good at programming that delegating some of that work to machines felt economical rather than merely convenient.

For programming, LLMs are useful for several reasons: they scaffold boilerplate quickly, explain unfamiliar codebases, and catch bugs that would otherwise cost hours, consistent with early studies of developer productivity [1].

For learning science (physics, chemistry, mathematics), the barrier to entry has become much lower: an LLM can field follow-up questions at whatever level of detail you need, though its answers still have to be checked against primary sources because of hallucinations [3].

Literature review has also become easier: LLMs can summarize papers and surface related work quickly, though there is evidence that this added reach comes with a narrowing of science's collective focus [6].

ARC-AGI is a promising step toward a more general problem solver: one capable not only of recognizing patterns, but of analyzing and generating new abstract concepts [4], [5].
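To make the pattern-recognition framing concrete: an ARC-AGI task presents a few input/output grid pairs that demonstrate a hidden transformation, and the solver must infer the rule and apply it to a fresh input. A minimal sketch of that setup, where the grids, the `infer_color_map` helper, and the single color-substitution rule are illustrative assumptions rather than real benchmark content:

```python
# Toy ARC-style task (hypothetical example, not from the real benchmark).
# Grids are lists of lists of small ints ("colors"); a task gives a few
# input/output demonstration pairs, and the solver infers the hidden rule.

def infer_color_map(pairs):
    """Infer a cell-wise color substitution from demonstration pairs."""
    mapping = {}
    for inp, out in pairs:
        for row_in, row_out in zip(inp, out):
            for a, b in zip(row_in, row_out):
                # setdefault records a -> b on first sight; a conflict
                # means the rule is not a simple per-color substitution.
                if mapping.setdefault(a, b) != b:
                    raise ValueError("rule is not a simple color map")
    return mapping

def apply_color_map(grid, mapping):
    """Apply the inferred substitution cell by cell."""
    return [[mapping.get(c, c) for c in row] for row in grid]

# Demonstration pairs: the hidden rule recolors 1 -> 2 and keeps 0.
train = [
    ([[0, 1], [1, 0]], [[0, 2], [2, 0]]),
    ([[1, 1], [0, 1]], [[2, 2], [0, 2]]),
]
rule = infer_color_map(train)
print(apply_color_map([[1, 0, 1]], rule))  # [[2, 0, 2]]
```

Real ARC tasks are far harder than a color swap, of course; the point of the benchmark is exactly that no fixed family of rules like this one covers it, so the solver has to synthesize new abstractions per task.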

References

  1. Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity. Microsoft Research.
  2. Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., & Ha, D. (2024). The AI Scientist. arXiv.
  3. Azamfirei, R., Kudchadkar, S. R., & Fackler, J. (2023). Large language models and the perils of their hallucinations. Critical Care, 27, 120.
  4. Chollet, F. (2019). On the Measure of Intelligence. arXiv.
  5. ARC Prize. ARC-AGI-1.
  6. Hao, Q., Xu, F., Li, Y., & Evans, J. (2026). Artificial intelligence tools expand scientists’ impact but contract science’s focus. Nature, 649(8099), 1237-1243.