I agree with you about novels. (I would say that, wouldn't I?) But I'm not convinced, either, by Amodei's idea that AIs could do the work of science, or at least the genuinely pathfinding, breakthrough-making part of science. Doing pattern recognition on a more-than-human scale, yes; systematically exploring vast possibility spaces we couldn't get into before, yes. But those are both ways in which AI can tease out unnoticed properties of data we've already got, things we already know (but don't know we know, etc.). I like Henry Farrell's characterisation of AI as a social technology of knowledge, a way of conveniently indexing and making available a digest of what's already thought. (Errors included.) What that won't do, since genuine novelty isn't a matter of probabilistic links between words and other symbols, is give us things that haven't been thought before. (Though it may show us things *implicit* in existing thinking that had been hiding in plain sight.) To me, the danger here is not one of AGI taking over the world, or any of that shit, but of AI hopelessly disrupting the models of apprenticeship and human effort without which you can't get to the genuinely new stuff. Without the effortful mastery of the state of the art as it presently exists in any domain, you can't put yourself in a position to do the genuinely original next thing. Why would you spend years laboriously becoming mediocre, as a necessary stage on the way to becoming good, if AI will do it for you, frictionlessly? I think widespread adoption of AI is likely to be a recipe for human deskilling, and therefore for cultural stagnation.
I quite agree with your larger point about AI making it redundant for us to master mediocrity -- although isn't that just an acceleration of what university administrators have been working on for decades?
But I think that Amodei has a slightly stronger point about science than you acknowledge: he's building on a claim he footnotes from Sydney Brenner — “Progress in science depends on new techniques, new discoveries and new ideas, probably in that order.” It is these new techniques, such as CRISPR and PCR, that he believes AI will find more quickly. And in both those cases, what was discovered was not a grand theory but a way to recombine things already known. That seems well within the capacity of a sufficiently well-trained AI. So I don't think you'll get an AI Einstein (or Brenner, for that matter), but you might well get a Jennifer Doudna or Kary Mullis. Mullis credited his discovery to [his use of LSD](https://en.wikipedia.org/wiki/Polymerase_chain_reaction#History), which seems to me a process that has something in common with the way in which AIs can jumble and reconstitute fragments of knowledge torn out of their original frames.
You say: "widespread adoption of AI is likely to be a recipe for human deskilling, and therefore for cultural stagnation," yet: "the danger here is not one of AGI taking over the world, or any of that shit...." If we are so deskilled that we must rely on AI for our culture (and our healthcare, and for waging our wars, and our governance, etc.) then what is left to hope for if not for "AGI taking over the world," or some such shit? The Second Coming, perhaps? Personally, and putting it as diplomatically as I can, I think you and Amodei and just about everyone except Kurzweil are willfully and woefully ignoring the probability of the emergence of consciousness in the AI-minded machine.