I use AI almost daily in my work: to help me write code to test new ideas, to do literature reviews to scope out whether an idea I have has already been done, and (probably most worryingly) to help come up with ideas for proofs. I also use it to restructure grant applications between different format requirements, and I have used it to brainstorm new research topics, with varying success.
I feel like the jump from GPT-4.5 to GPT-5 (or equivalently, Gemini 2.5 to 3) was extremely impressive in terms of the ability to solve complex problems and write high-quality code. Metaculus (albeit presumably with some bias) predicts the first weakly general AI will be devised by Feb 2028 and strong general AI by Aug 2033.
For context, I am a young pre-tenure researcher with a 30+ year career ahead of me, and I am concerned that I will need to completely retrain or take a job that is not suited to my interests or abilities. I am neurodivergent, thrive in academia, and can't see myself enjoying a job that isn't highly intellectually stimulating. I am even considering moving to industry/finance as soon as possible to build a safety net before my skillset is made redundant.
Am I being naïve in believing Metaculus's predictions? Or am I worrying too much about their consequences?
What would a career in academia in theoretical STEM look like if general AI becomes a reality?
I assume undergraduate teaching would be relatively safe for a while, since people pursue courses out of interest rather than practical relevance, but would governments fund PhDs and postdocs if an AI could do the same research at a tenth of the cost?