Artificial Intelligence Study Examines the Future of Computers

Visual computing technology is at the core of the computers humans use in their daily lives. It helps machines take in visual information, process it, make sense of what they see, and respond accordingly. But what if a computer could not only recognize what it sees, but also gauge how well it understands it? To keep improving computer vision, computer scientists are now turning artificial-intelligence analysis on the technology itself, probing what it knows and what it doesn’t know.
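
One concrete way to probe what a vision model “knows” is to look at how confident its predictions are. The sketch below is my own illustration rather than anything from the Cornell study: it scores the output of a hypothetical three-class classifier with predictive entropy, which is low when the model is sure of itself and high when it isn’t.

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(logits):
    """Entropy of the predictive distribution: low when the model is
    confident ("knows"), high when it is uncertain ("doesn't know")."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum())

# Hypothetical logits from a three-class image classifier.
confident = np.array([8.0, 0.5, -1.0])   # strongly favors class 0
uncertain = np.array([1.1, 1.0, 0.9])    # nearly uniform

print(predictive_entropy(confident))  # near 0 -> the model "knows"
print(predictive_entropy(uncertain))  # near log(3) -> it "doesn't know"
```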

According to Kyle Whitmire, a computer science researcher with Cornell’s Applied Intelligence Laboratory, computer vision is an advanced form of computing. “While there are a great many computer vision applications, that does not mean the visual data is being processed by some single, specific sequence of calculations. In the simplest form, a visual dataset can be converted into a mathematical model that is computationally reducible through a series of mathematical operations. This generates a reasonably simple representation of the visual data,” said Whitmire.
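
To make that idea concrete, the toy NumPy sketch below (my example, not code from the lab) reduces a grayscale image to a much smaller grid of numbers using plain mean pooling, one simple instance of the kind of mathematical reduction Whitmire describes.

```python
import numpy as np

def reduce_image(img, block=8):
    """Reduce a grayscale image to a coarse feature grid by averaging
    non-overlapping block x block patches (simple mean pooling)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block           # trim to a multiple of block
    patches = img[:h, :w].reshape(h // block, block, w // block, block)
    return patches.mean(axis=(1, 3))              # one number per patch

# A synthetic 64x64 "image": random noise with one bright square.
img = np.random.rand(64, 64)
img[16:32, 16:32] += 2.0

features = reduce_image(img)
print(features.shape)   # (8, 8): 4,096 pixels reduced to 64 values
```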

Computer vision serves two broad purposes: analyzing visual data such as photographs to “teach” computers to understand what they see, and operating as part of a larger computing system that solves problems. Progress on either front feeds the other, producing better computer vision overall.

“By teaching computers to recognize patterns in images, we can make computers more capable of detecting anomalies in data that, if not noticed and ‘addressed,’ could affect the outcomes of tasks we perform in real time and on a daily basis,” Whitmire said. Predictive modeling, in which the computer makes predictions about the world, can benefit directly from computer vision. Imagine teaching a computer to see a forest: starting from a detailed representation of the scene, the system refines what it attends to as it learns more about its surroundings, eventually ‘seeing’ in ways better suited to the task at hand.
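
Anomaly detection of the sort Whitmire mentions can be sketched very simply. The following example is an assumption-laden illustration, not the lab’s method: it summarizes “normal” feature vectors by their mean and spread, then flags samples that sit too far from that baseline.

```python
import numpy as np

def fit_baseline(features):
    """Summarize 'normal' data by its per-dimension mean and spread."""
    return features.mean(axis=0), features.std(axis=0) + 1e-8

def anomaly_score(x, mean, std):
    """Mean absolute z-score: how far a sample sits from the baseline."""
    return float(np.abs((x - mean) / std).mean())

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 16))   # features of "typical" images
mean, std = fit_baseline(normal)

typical = rng.normal(0.0, 1.0, size=16)
odd = rng.normal(4.0, 1.0, size=16)             # shifted distribution: an anomaly

print(anomaly_score(typical, mean, std))  # ~0.8, below any sensible threshold
print(anomaly_score(odd, mean, std))      # ~4.0, clearly flagged
```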

Leveraging computer vision to improve computer vision will require combining the best computational approaches to image recognition with techniques that let computers exploit large amounts of data to solve complex problems. As neural networks scale to many more nodes while maintaining accuracy, these systems will be able to handle an even wider range of problems.
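
As a rough illustration of the neural networks involved (the layer sizes, class count, and data here are placeholders I chose, not details from the article), a minimal PyTorch convolutional classifier and a single training step might look like this:

```python
import torch
import torch.nn as nn

# A minimal convolutional classifier: two conv layers feed a linear head.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),            # 10 hypothetical classes
)

# One training step on a fake batch of 28x28 grayscale images.
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```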

It might seem counterintuitive that more intelligence in a computer could bring more disruption to the real world, but computer vision is advancing at a pace that will eventually converge with automation. When our forebears reached the Industrial Revolution, they didn’t do it alone: dozens of institutions, from Congress to the coal mines, played significant roles in building and fostering innovation.

We don’t yet know how these challenges will be solved, but because the algorithms behind computer vision are close cousins of those that power natural language processing in applications like Siri, Google Maps, and Skype, I am eager to see the collaborations happen. I expect a swarm of machine learning specialists to find interesting ways to combine their expertise with that of other academics and engineers to advance machine vision.

Using pictures to control tiny robots might seem like the kind of attention-grabbing research designed to generate media interest, but the team has been working on deep problems in the field for over five years. What the team is really doing is mimicking a process that has been used successfully for 50 years: applying computer vision to find patterns in otherwise random-looking sets of data, something it has done for more than a decade in a collaborative effort with Steve Wozniak.

Given the massive processing power available, it is a little surprising that computers don’t already handle far more natural language processing. Partly for that reason, adaptive maps are now being developed by providers such as Microsoft. The most recent product of this effort, built in 2017, attempts a map-recognition algorithm that is 85 percent accurate, a feat that probably no human could match unaided.
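
The 85 percent figure is a statement about classification accuracy, that is, the share of predictions that match the ground truth. Here is a self-contained sketch with made-up labels and predictions (nothing here comes from Microsoft’s actual system):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical outputs from a map-feature recognizer on 20 tiles.
labels      = ["road"] * 10 + ["building"] * 10
predictions = ["road"] * 9 + ["building"] * 9 + ["road"] * 2

print(f"accuracy: {accuracy(predictions, labels):.0%}")  # 85%
```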

Source: Cornell