By Ken Goldberg, Professor of Industrial Engineering and Operations Research and William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley


Engaging with AI’s unique form of creativity could lead to unexpected new discoveries. (Image by Champ Panupong Techawongthawon, via Unsplash)

Imagine you’re practicing with a band and someone walks in with a new instrument. You and your pals would probably want to check it out and see how you might jam with it. You probably wouldn’t jump to the conclusion that your guitar is obsolete and that this is the end of music.


Yet this is how we’re reacting to the latest innovation in artificial intelligence. We grew up with Frankenstein, HAL 9000, Her, and similar stories in which artificial intelligence runs amok. That archetype is deeply ingrained, but the reality is that humans have adapted to every single technological innovation. Yes, there are always downsides to technology, but none of them has led to the end of civilization. So let’s take a collective breath and resist AI xenophobia.


I’ve been a skeptic about AI for over 40 years. By day, I’m director of a robotics lab at the University of California, Berkeley, but I’m also an artist. I’ve always said that AI would never be creative; it wouldn’t come up with an interesting work of art, an interesting invention, or a funny joke. But just after Thanksgiving, the awkwardly named ChatGPT went online. After using it for a few hours I thought: “What if I’ve been wrong?”


ChatGPT and other generative models are a significant advance in AI. The scientists and artists I respect most are approaching it with curiosity, not fear. What new inventions can it generate? What new songs can it write? What can we learn from it, and how can we collaborate with it? I’m not worried that AI will steal our jobs, undermine the economy, or spawn an unintended global Armageddon. And I’m not worried about making mistakes: all scientists make conjectures, and all artists take poetic license.


I didn’t sign the petition to pause AI because its creative potential is vastly more interesting than crisis messaging. Novelty and diversity in backgrounds, abilities, talents, and perspectives have always been crucial to discovering new inventions and art forms. The world thrives on coexistence, not a “winner-take-all” Darwinian totalitarianism.


Six hundred years ago, almost everyone believed that humans were at the center of the universe, that the sun, moon, and stars revolved around Earth. When Galileo figured out the formula for grinding lenses, he looked up and saw moons revolving around Jupiter. Not everything revolved around us. The Catholic Church condemned him for heresy, but the Copernican revolution prevailed, and humans accepted that Earth is not the center of the universe.


The same math behind the telescope led to the microscope and the discovery of human cells and microbes: entire worlds lurking right under our noses. These discoveries were so surprising that people started questioning everything they thought they knew. René Descartes applied scientific skepticism to question his own existence. He found one thing that could not be denied: “cogito, ergo sum,” I think, therefore I am.


The human mind became the measure of all things, and our remarkable ability to reason systematically gave rise to the scientific method. The Enlightenment took off and produced centuries of discoveries and innovations, many of which at first appeared magical: from electricity to photography, from the airplane to the spaceship, from the double helix to the cellphone. These technologies changed how we viewed ourselves and the world around us and enhanced our confidence in our own scientific power.


Now a new revolution is brewing. Recent advances in AI are forcing us to ask: What if humankind’s ability to reason and think can be performed by AI? For some kinds of thinking, like recognizing speech, identifying items in photographs, and playing Go, AI has already delighted us. Those advances were impressive, but for the most part they addressed relatively narrow problems.





Story Source

Original article written by Ken Goldberg for the Boston Globe