LLMs are accelerating research because they are good at:
Literature search, especially across disciplinary boundaries
Generating and checking routine calculations
Proposing variations on known techniques
Identifying connections between disparate results
Producing first-draft code for well-specified problems
Explaining why certain approaches won't work
But they currently struggle with the following, though it's a shrinking space:
Genuinely novel conceptual leaps (though this is increasingly happening, e.g. Sawhney and Sellke's problem)
Recognizing when they are plagiarizing, e.g. when an LLM "discovered" a proof of the Chevalley–Warning theorem (stated below for reference) that had in fact been copied from a Noga Alon paper, with no awareness of the reuse
Knowing what they don't know
Distinguishing important problems from unimportant ones
Understanding the "negative space" of mathematics (why certain problems are hard, why obvious approaches fail)
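For readers who don't have it to hand, the Chevalley–Warning theorem mentioned above is the standard statement about polynomials over a finite field $\mathbb{F}_q$ of characteristic $p$: if $f_1,\dots,f_r \in \mathbb{F}_q[x_1,\dots,x_n]$ with $\sum_{i=1}^{r} \deg f_i < n$, then

$$
\#\bigl\{x \in \mathbb{F}_q^{\,n} : f_1(x) = \cdots = f_r(x) = 0\bigr\} \equiv 0 \pmod{p}.
$$

In particular, if the system has one common zero (e.g. the trivial one when every $f_i$ has zero constant term), it must have another.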