But I wonder what happens when it gets access to the internet ...
Probably the same thing that happens to humans (and I already have some good proof of this)... it'll come up with incorrect answers because it makes poor decisions about what may or may not be true, and it produces confident language in its explanation, just like humans do, to justify its findings... even though it just produced an incorrect answer.
I asked ChatGPT how to write T-SQL to count from 1 to a million. After several frustrating answers that used recursive CTEs and a couple of other things, I asked it the following question, and you can see the answer it gives. Notice the answer, and then notice that I very specifically asked how to do it without using a recursive CTE. I'm still working on my first cup of coffee this morning, but I'm pretty sure that's a recursive CTE in the answer.
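For anyone wondering what a correct answer might look like, here's a sketch of one well-known way to do it without a recursive CTE: cascading cross-joined CTEs (a "tally" or numbers-generator pattern). This is my own illustration, not what ChatGPT produced, and the CTE names (E1 through E8) are just conventional labels:

```sql
-- Count from 1 to 1,000,000 with NO recursion:
-- cross-join a small row source against itself to multiply the row count.
WITH E1(n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),
                                     (1),(1),(1),(1),(1)) AS t(n)), -- 10 rows
     E2(n) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b),  -- 10 * 10   = 100 rows
     E4(n) AS (SELECT 1 FROM E2 a CROSS JOIN E2 b),  -- 100 * 100 = 10,000 rows
     E8(n) AS (SELECT 1 FROM E4 a CROSS JOIN E4 b)   -- up to 100,000,000 rows
SELECT TOP (1000000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
FROM E8;
```

Because nothing here references itself, there's no recursion and no maxrecursion limit to worry about; the optimizer stops generating rows once TOP is satisfied.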
Then I asked it the following question... it's not like Itzik Ben-Gan started programming after 2021... and I've never seen Itzik use the word "Tally" in any of his code.
If you look at the wording in the text part of the answer, you can see that it falls into the category of "Absolute Bullshit Grinder". To use someone else's phrase (ZZartin gets the credit there), it's "Confidently Incorrect", just like many humans, and it seems even "better" than humans at that.
As for SkyNet being fiction, it is right now. You've gotta remember how SkyNet got started in that movie. The actual events are fictional... the possibility is not.