Artificial Intelligence (AI) is intelligence exhibited by machines. AI encompasses, among other technologies, machine learning, natural language processing, speech recognition, and data mining. The term is typically used when a machine mimics cognitive functions associated with humans, such as learning and problem-solving. Under this definition, AI is technology that attempts to emulate human performance by learning, drawing its own conclusions, understanding complex content, engaging in natural dialogue with humans, enhancing human cognitive performance, or replacing people in non-routine tasks. A widely known test for machine intelligence is the Turing Test, formulated by the English mathematician Alan Turing: a computer is deemed intelligent if it can fool a human interrogator into believing it is a human being.
Today’s AI is narrow (or weak) AI: it is designed to perform a single, well-defined task, such as face detection, internet search, or driving a car. The long-term goal of many researchers, however, is to create strong (or general) AI. Narrow AI can surpass humans at a specific task, such as playing chess or solving equations, whereas general AI would outperform humans at (almost) any cognitive task. Machines as broadly competent as humans have not yet been built, but current machines keep improving within their specific applications.
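The gap between narrow and general AI can be made concrete with a toy sketch (invented for illustration, not drawn from any particular system): the minimax search below plays tic-tac-toe perfectly, yet it can do nothing outside that one task.

```python
# A toy "narrow AI": perfect tic-tac-toe play via minimax search.
# The board is a list of 9 cells, each "X", "O", or " ".
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has a winning line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    if " " not in board:
        return 0, None  # draw
    moves = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[i] = " "
            moves.append((score, i))
    return max(moves) if player == "X" else min(moves)

if __name__ == "__main__":
    # X has two in a row; minimax finds the winning move (cell 2).
    board = list("XX OO    ")
    score, move = minimax(board, "X")
    print(score, move)
```

Within its narrow domain the search is unbeatable, but nothing in it generalizes: the same program cannot recognize a face or parse a sentence, which is exactly the limitation general AI is meant to overcome.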
The core problems of AI involve programming computers for capabilities such as knowledge representation, reasoning, problem-solving, perception, learning, planning, and the manipulation and movement of objects. As machines become increasingly skilled, capabilities once thought to require intelligence tend to be dropped from the definition. Optical character recognition (converting text in an image into editable text), for example, is now so routinely applied that it is rarely called AI anymore. Capabilities currently classified as AI include understanding human speech, competing at a high level in strategic games (such as chess and Go), driving cars autonomously, and interpreting complex data.
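The "learning" capability above can be illustrated with one of the simplest possible learning schemes, nearest-neighbour classification: a new input is labelled by the most similar example seen before. The data and names below are invented purely for illustration.

```python
# A minimal sketch of learning from examples: 1-nearest-neighbour
# classification. All data here is made up for illustration.
import math

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`.

    `train` is a list of (point, label) pairs; points are coordinate tuples.
    """
    best_point, best_label = min(train, key=lambda ex: math.dist(ex[0], query))
    return best_label

if __name__ == "__main__":
    # Two invented clusters: label "A" near the origin, label "B" near (8, 8).
    examples = [((1, 1), "A"), ((1, 2), "A"), ((8, 8), "B"), ((9, 8), "B")]
    print(nearest_neighbor(examples, (2, 1)))
```

The program never encodes a rule for what makes something an "A" or a "B"; the behaviour comes entirely from the examples, which is the essential idea behind machine learning, however much more sophisticated modern methods are.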
In science fiction, AI is often portrayed as robots with human attributes, but AI covers much more: Google’s search algorithms, Siri, Watson, self-driving cars, and autonomous weapons. Applications such as self-driving cars and speech recognition are moving from research into everyday use. Progress in natural language processing is so rapid that many people already schedule meetings by cc’ing a personal AI assistant on an email. IBM Watson can deliver a medical diagnosis in under ten minutes. Students are trained to fight hackers, and AI-targeted venture capital investments are made daily. Large companies strive to integrate AI into their products. All of this underlines the enormous amount of time, talent, and capital flowing into the field.
Computer scientists have been working on AI for decades. The reason we suddenly hear so much about it is that powerful computers have become cheap and small, and computer scientists have learned to write programs that exploit this hardware far better. The resulting improvements in AI software, together with AI making its way into more applications, create a commercial incentive to invest in specialized, low-power chips, which in turn enable even more AI applications. AI is thus steadily becoming an everyday, helpful technology.