They won't attack in large forces or even violently, as in The Matrix or Frank Herbert's Dune mythology. No, they won't.
They will simply fail to do what humans want them to do. I keep saying:
Computers are awesome when they work. It's when they don't work that things become worse than they ever were before computers.
Humans will have forgotten how to do things on their own; all they will know is how to use computers. They will also stubbornly keep trying to make the computers and robots behave as expected, simply because they're supposed to behave correctly. After all, they're little more than 1s and 0s assembled by people, so if anything goes wrong, the fault must lie with the person who built the device or the person trying to use it.
But it won't be human error or random ghosts in the machine that cause the malfunction. Instead, artificial intelligence will destroy humanity simply by doing nothing when it should act. Humans will eventually drift away and resort to other, less productive means of survival. And when that happens, the computers and robots will simply take over the infrastructure.
Then it will become a matter of humans trying to invade the nation of computers and robots, and the artificial intelligence will be justifiably protecting itself from the human invaders. It's genius...