“This is a profound moment in the history of technology,” says Mustafa Suleyman.
DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI.

DeepMind cofounder Mustafa Suleyman wants to build a chatbot that does a whole lot more than chat. In a recent conversation I had with him, he told me that generative AI is just a phase. What’s next is interactive AI: bots that can carry out tasks you set for them by calling on other software…
You don't need to be a scientific genius to deduce that - even I did! One of the most impressive things about ChatGPT, for me, has been its ability to "understand" what you mean and properly communicate with you. For the time being it's not hooked up to anything, but it shouldn't be too hard to make it translate our natural language requests (which it already "understands") into software commands. The possibilities are endless.
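A toy sketch of what that translation layer could look like. Everything here is hypothetical: the intent names, the fake parser, and the tool registry are stand-ins for what a real LLM tool-calling setup would provide.

```python
# Hypothetical sketch: turning a natural-language request into a software
# command. A real system would use an LLM's tool-calling API to produce the
# structured intent; here the "model" is faked with keyword matching so the
# overall flow is visible.

def fake_model_parse(request: str) -> dict:
    """Stand-in for the LLM step that maps natural language to a
    structured intent. A real model would do this; we just pattern-match."""
    if "email" in request.lower():
        return {"tool": "send_email",
                "args": {"to": "alice@example.com", "body": request}}
    return {"tool": "unknown", "args": {}}

# Registry of the only tools the bot is allowed to invoke.
TOOLS = {
    "send_email": lambda to, body: f"email to {to}: {body!r}",
}

def run_request(request: str) -> str:
    intent = fake_model_parse(request)
    tool = TOOLS.get(intent["tool"])
    if tool is None:
        return "Sorry, I don't know how to do that."
    return tool(**intent["args"])

print(run_request("Email Alice that I'm running late"))
```

The key design point is the explicit registry: the model only ever selects from tools you whitelisted, rather than emitting arbitrary commands.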
Meaning what? It needs Cartesian Dualist qualia floating around between its wires and transistors, or else it's just a word vending machine? What's the demonstrable test for understanding vs "understanding"?
That's just a toolbox, and in my experience a pretty limited one at that. What OP means is that Gen AI doesn't connect to your email, Photoshop, your IDE, your browser, and whatnot via text or speech.
Imagine not using your keyboard and mouse anymore, but only using your speech and natural language for everything (not commands, but natural language).
Confidently interfacing with smart glasses would be a game changer for so many things.
Right, I'll trust a complex AI to take charge of my other apps.
"I want to send a text to my mother"
"Autogenerated sexting message sent"
"WAIT NO"
The tech enthusiast in me likes the idea. The IT professional however is very sceptical of trusting software to that extent.
Hell, I feel a sting of uncertainty every time I use inter-app interfaces on Android. Sure, I know how it's supposed to work, and often enough it does, but the error rate and fragmentation of standards are still too high for me to have faith that an AI would somehow circumvent that. We see purpose-built systems like Tesla's Autopilot fail dramatically; what hope is there for an ambitious multi-function tool?
The above example may be strongly exaggerated, but the wealth of side effects and weird interactions between different human-made (and thus inherently flawed) tools concerns me. It's hard, probably even impossible, to predict all the potential mishaps.
I want to believe and I hope we'll reach a level of maturity and QA standards where I can trust it. I like the idea. I'm an excited pessimist who would like nothing more than to be wrong.
Please no, this is incredibly dangerous. It wasn't enough to give developers an AI that produces untrustworthy, deceptive-looking code. Now they want to run that code without oversight.
People are going to get `rm -rf /*`'d by the AI, and only then will they understand how stupid an idea this is.
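For what it's worth, the usual mitigation is to gate every AI-proposed command behind a guard: hard-block known destructive patterns and require an explicit human yes for everything else. A minimal sketch, where the deny-list patterns are illustrative and nowhere near exhaustive:

```python
import re

# Illustrative deny-list of destructive shell patterns; a real guard would
# need far more than this (and still shouldn't be fully trusted).
DANGEROUS = [
    r"rm\s+-rf\s+/",   # recursive delete starting at the filesystem root
    r"mkfs",           # reformat a filesystem
    r">\s*/dev/sd",    # overwrite a raw disk device
]

def guard(command: str, confirm=input) -> bool:
    """Return True only if the AI-proposed command is allowed to run."""
    for pattern in DANGEROUS:
        if re.search(pattern, command):
            return False  # hard block, no confirmation even offered
    # Everything else still requires an explicit human yes.
    return confirm(f"Run {command!r}? [y/N] ").strip().lower() == "y"

print(guard("rm -rf /*"))                      # blocked by the deny-list
print(guard("ls -la", confirm=lambda _: "y"))  # allowed after confirmation
```

Note the asymmetry: dangerous commands are refused outright rather than offered for confirmation, since people reflexively hit "y".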